
iOS: How do I make AKSequencer switch soundfonts?


I'm using the AudioKit API to build a feature where the user taps notes on the screen and each note sounds with the soundfont they have chosen. I then let them collect a set of notes and play them back in an order of their choosing. The problem is that I'm using AKSequencer to play the notes back, and when AKSequencer plays them, it never sounds like the SoundFont; it just beeps. Is there code that lets me change the sound AKSequencer produces?

I'm using AudioKit for this.

Sampler1 is an NSObject that holds the MIDI sampler, player, and so on:

    import Foundation
    import AVFoundation
    import AudioToolbox
    import AudioKit

    class Sampler1: NSObject {
    var engine = AVAudioEngine()
    var sampler: AVAudioUnitSampler!
    var midisampler = AKMIDISampler()
    var octave = 4
    let midiChannel = 0
    var midiVelocity = UInt8(127)
    // legacy Core Audio plumbing (the AUGraph path is mostly commented out below)
    var audioGraph: AUGraph?
    var musicPlayer: MusicPlayer?
    var patch = UInt32(0)
    var synthUnit: AudioUnit?
    var synthNode = AUNode()
    var outputNode = AUNode()

    override init() {
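        // build the AVAudioEngine graph, load the SoundFont, start the engine,
        // and prepare the legacy MusicSequence-based sequencer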
        super.init()
     //   engine = AVAudioEngine()
        sampler = AVAudioUnitSampler()

        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        loadSF2PresetIntoSampler(5)
      /*   sampler2 = AVAudioUnitSampler()
        engine.attachNode(sampler2)
        engine.connect(sampler2, to: engine.mainMixerNode, format: nil)
       */
        addObservers()

        startEngine()

        setSessionPlayback()
      /*  CheckError(NewAUGraph(&audioGraph))
        createOutputNode(audioGraph: audioGraph!, outputNode:       &outputNode)
        createSynthNode()
        CheckError(AUGraphNodeInfo(audioGraph!, synthNode, nil,   &synthUnit))
        let synthOutputElement: AudioUnitElement = 0
        let ioUnitInputElement: AudioUnitElement = 0
        CheckError(AUGraphConnectNodeInput(audioGraph!, synthNode, synthOutputElement,
                                    outputNode, ioUnitInputElement))
        CheckError(AUGraphInitialize(audioGraph!))
        CheckError(AUGraphStart(audioGraph!))
        loadnewSoundFont()
        loadPatch(patchNo: 0)*/
        setUpSequencer()

    }
    func createOutputNode(audioGraph: AUGraph, outputNode: UnsafeMutablePointer<AUNode>) {
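        // describes a RemoteIO output unit and adds it to the (legacy) AUGraph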
        var cd = AudioComponentDescription(
            componentType: OSType(kAudioUnitType_Output),
            componentSubType: OSType(kAudioUnitSubType_RemoteIO),
            componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
            componentFlags: 0,componentFlagsMask: 0)
        CheckError(AUGraphAddNode(audioGraph, &cd, outputNode))
    }
    func loadSF2PresetIntoSampler(_ preset: UInt8) {
        guard let bankURL = Bundle.main.url(forResource: "Arachno SoundFont - Version 1.0", withExtension: "sf2") else {
            print("could not load sound font")
            return
        }

        do {
            try self.sampler.loadSoundBankInstrument(at: bankURL,
                                                     program: preset,
                                                     bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                                     bankLSB: UInt8(kAUSampler_DefaultBankLSB))

            // AKMIDISampler.loadSoundFont takes Int preset/bank arguments
            try midisampler.loadSoundFont(bankURL.path, preset: 0, bank: Int(kAUSampler_DefaultBankLSB))
        //  try midisampler.loadPath(bankURL.absoluteString)
        } catch {
            print("error loading sound bank instrument \(error)")
        }
    }
    func createSynthNode() {
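        // describes Apple's MIDISynth music device and adds it to the AUGraph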
        var cd = AudioComponentDescription(
            componentType: OSType(kAudioUnitType_MusicDevice),
            componentSubType: OSType(kAudioUnitSubType_MIDISynth),
            componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
            componentFlags: 0,componentFlagsMask: 0)
        CheckError(AUGraphAddNode(audioGraph!, &cd, &synthNode))
    }
    func setSessionPlayback() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try
                audioSession.setCategory(AVAudioSession.Category.playback, options:
                    AVAudioSession.CategoryOptions.mixWithOthers)
        } catch {
            print("couldn't set category \(error)")
            return
        }

        do {
            try audioSession.setActive(true)
        } catch {
            print("couldn't set category active \(error)")
            return
        }
    }
    func startEngine() {
        if engine.isRunning {
            print("audio engine already started")
            return
        }

        do {
            try engine.start()
            print("audio engine started")
        } catch {
            print("oops \(error)")
            print("could not start audio engine")
        }
    }

    func addObservers() {
        // the handler methods (engineConfigurationChange(_:), etc.) live elsewhere;
        // #selector syntax would be safer than string selectors
        NotificationCenter.default.addObserver(self,
                                               selector: "engineConfigurationChange:",
                                               name: NSNotification.Name.AVAudioEngineConfigurationChange,
                                               object: engine)

        // session notifications are posted by the audio session, not the engine
        NotificationCenter.default.addObserver(self,
                                               selector: "sessionInterrupted:",
                                               name: AVAudioSession.interruptionNotification,
                                               object: AVAudioSession.sharedInstance())

        NotificationCenter.default.addObserver(self,
                                               selector: "sessionRouteChange:",
                                               name: AVAudioSession.routeChangeNotification,
                                               object: AVAudioSession.sharedInstance())
    }

    func removeObservers() {
        NotificationCenter.default.removeObserver(self,
                                                  name: NSNotification.Name.AVAudioEngineConfigurationChange,
                                                  object: nil)

        NotificationCenter.default.removeObserver(self,
                                                  name: AVAudioSession.interruptionNotification,
                                                  object: nil)

        NotificationCenter.default.removeObserver(self,
                                                  name: AVAudioSession.routeChangeNotification,
                                                  object: nil)
    }

    private func setUpSequencer() {
        // set the sequencer voice to storedPatch so we can play along with it using patch
        var status = NewMusicSequence(&musicSequence)
        if status != noErr {
            print("\(#line) bad status \(status) creating sequence")
        }

        status = MusicSequenceNewTrack(musicSequence!, &track)
        if status != noErr {
            print("error creating track \(status)")
        }

        // 0xB0 = bank select, first we do the most significant byte
        var chanmess = MIDIChannelMessage(status: 0xB0 | sequencerMidiChannel, data1: 0, data2: 0, reserved: 0)
        status = MusicTrackNewMIDIChannelEvent(track!, 0, &chanmess)
        if status != noErr {
            print("creating bank select event \(status)")
        }
        // then the least significant byte
        chanmess = MIDIChannelMessage(status: 0xB0 | sequencerMidiChannel, data1: 32, data2: 0, reserved: 0)
        status = MusicTrackNewMIDIChannelEvent(track!, 0, &chanmess)
        if status != noErr {
            print("creating bank select event \(status)")
        }

        // set the voice
        chanmess = MIDIChannelMessage(status: 0xC0 | sequencerMidiChannel, data1: UInt8(0), data2: 0, reserved: 0)
        status = MusicTrackNewMIDIChannelEvent(track!, 0, &chanmess)
        if status != noErr {
            print("creating program change event \(status)")
        }

        CheckError(MusicSequenceSetAUGraph(musicSequence!, audioGraph))
        CheckError(NewMusicPlayer(&musicPlayer))
        CheckError(MusicPlayerSetSequence(musicPlayer!, musicSequence))
        CheckError(MusicPlayerPreroll(musicPlayer!))
    }
    func loadnewSoundFont() {
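        // kMusicDeviceProperty_SoundBankURL points the MIDISynth unit at the .sf2 file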
        var bankURL = Bundle.main.url(forResource:  "Arachno SoundFont - Version 1.0", withExtension: "sf2")
        CheckError(AudioUnitSetProperty(synthUnit!, AudioUnitPropertyID(kMusicDeviceProperty_SoundBankURL), AudioUnitScope(kAudioUnitScope_Global), 0, &bankURL, UInt32(MemoryLayout<URL>.size)))
    }
    func loadPatch(patchNo: Int) {
        let channel = UInt32(0)
        var enabled = UInt32(1)
        var disabled = UInt32(0)
        patch = UInt32(patchNo)

        CheckError(AudioUnitSetProperty(
            synthUnit!,
            AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            &enabled,
            UInt32(MemoryLayout<UInt32>.size)))

        let programChangeCommand = UInt32(0xC0 | channel)
        CheckError(MusicDeviceMIDIEvent(self.synthUnit!, programChangeCommand, patch, 0, 0))

        CheckError(AudioUnitSetProperty(
            synthUnit!,
            AudioUnitPropertyID(kAUMIDISynthProperty_EnablePreload),
            AudioUnitScope(kAudioUnitScope_Global),
            0,
            &disabled,
            UInt32(MemoryLayout<UInt32>.size)))

        // the previous programChangeCommand just triggered a preload
        // this one actually changes to the new voice
        CheckError(MusicDeviceMIDIEvent(synthUnit!, programChangeCommand, patch, 0, 0))
    }

    func play(number: UInt8) {
        sampler.startNote(number, withVelocity: 127, onChannel: 0)
    }

    func stop(number: UInt8) {
        sampler.stopNote(number, onChannel: 0)
    }
    func musicPlayerPlay() {
        var playing: DarwinBoolean = false
        CheckError(MusicPlayerIsPlaying(musicPlayer!, &playing))
        if playing.boolValue {
            let status = MusicPlayerStop(musicPlayer!)
            if status != noErr {
                print("Error stopping \(status)")
                CheckError(status)
                return
            }
        }

        CheckError(MusicPlayerSetTime(musicPlayer!, 0))
        CheckError(MusicPlayerStart(musicPlayer!))
    }



    var avsequencer: AVAudioSequencer!
    var sequencerMode = 1
    var sequenceStartTime: Date?
    var noteOnTimes = [Date](repeating: Date(), count: 128)
    var musicSequence: MusicSequence?
    var midisequencer = AKSequencer()
    //  var musicPlayer: MusicPlayer?
    let sequencerMidiChannel = UInt8(1)
    var midisynthUnit: AudioUnit?

    //track is the variable the notes are written on
    var track: MusicTrack?
    var newtrack: AKMusicTrack?


    func setupSequencer(name: String) {
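        // loads a standard MIDI file from the bundle into an AVAudioSequencer attached to the engine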

        self.avsequencer = AVAudioSequencer(audioEngine: self.engine)
        let options = AVMusicSequenceLoadOptions.smfChannelsToTracks

        if let fileURL = Bundle.main.url(forResource: name, withExtension: "mid") {
            do {
                try avsequencer.load(from: fileURL, options: options)
                print("loaded \(fileURL)")
            } catch {
                print("something screwed up \(error)")
                return
            }
        }
        avsequencer.prepareToPlay()
    }

    func playsequence() {
        if avsequencer.isPlaying {
            stopsequence()
        }

        avsequencer.currentPositionInBeats = TimeInterval(0)

        do {
            try avsequencer.start()
        } catch {
            print("cannot start \(error)")
        }
    }


    func creatnewtrck() {
        let sequencelegnth = AKDuration(beats: 8.0)   // currently unused
        newtrack = midisequencer.newTrack()
    }

    func addnotestotrack() {
        // AKMIDISampler
    }
    func stopsequence() {
        avsequencer.stop()
    }

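    // SequencerMode is an enum (off / recording / playing) defined elsewhere in the project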
    func setSequencerMode(mode: Int) {
        sequencerMode = mode
        switch(sequencerMode) {
        case SequencerMode.off.rawValue:
            print(mode)
         //   CheckError(osstatus: MusicPlayerStop(musicPlayer!))
        case SequencerMode.recording.rawValue:
            print(mode)

        case SequencerMode.playing.rawValue:
            print(mode)

        default:
            break
        }
    }
     /*   func noteOn(note: UInt8) {
        let noteCommand = UInt32(0x90 | midiChannel)
        let base = note - 48
        let octaveAdjust = (UInt8(octave) * 12) + base
        let pitch = UInt32(octaveAdjust)

        CheckError(MusicDeviceMIDIEvent(self.midisynthUnit!,
                                                  noteCommand, pitch, UInt32(self.midiVelocity), 0))
    }

    func noteOff(note: UInt8) {
        let channel = UInt32(0)
        let noteCommand = UInt32(0x80 | channel)
        let base = note - 48
        let octaveAdjust = (UInt8(octave) * 12) + base
        let pitch = UInt32(octaveAdjust)

        CheckError(MusicDeviceMIDIEvent(self.midisynthUnit!,
                                                  noteCommand, pitch, 0, 0))
    }*/

    func noteOn(note: UInt8) {
        if sequencerMode == SequencerMode.recording.rawValue {
            print("recording sequence note")
            noteOnTimes[Int(note)] = Date()
        } else {
            print("no notes")
        }
    }

    func noteOff(note: UInt8, timestamp: Float64, sequencetime: Date) {
        if sequencerMode == SequencerMode.recording.rawValue {
            let duration: Double = Date().timeIntervalSince(noteOnTimes[Int(note)])
            let onset: Double = noteOnTimes[Int(note)].timeIntervalSince(sequencetime)
            //the order of the notes in the array
            var beat: MusicTimeStamp = 0

            CheckError(MusicSequenceGetBeatsForSeconds(musicSequence!, onset, &beat))
            var mess = MIDINoteMessage(channel: sequencerMidiChannel,
                                       note: note,
                                       velocity: midiVelocity,
                                       releaseVelocity: 0,
                                       duration: Float(duration) )
            CheckError(MusicTrackNewMIDINoteEvent(track!, timestamp, &mess))
        }
    }
}




The code that plays the collection of notes:

    _ = sample.midisequencer.newTrack()

    let sequencelegnth = AKDuration(beats: 8.0)
    sample.midisequencer.setLength(sequencelegnth)
    // format is a DateFormatter created elsewhere
    sample.sequenceStartTime = format.date(from: format.string(from: Date()))

    sample.midisequencer.setTempo(160.0)
    // note: the track's MIDI output is never set here, so playback falls back
    // to the default sampler (see the answers below)
    sample.midisequencer.enableLooping()
    sample.midisequencer.play()


The MIDI sampler is an AKMIDISampler.

At a minimum, you need to connect the AKSequencer to some kind of output to get it to make sound. With the older version (now called AKAppleSequencer), if you don't explicitly set the output, you will hear the default (beepy) sampler.

For example, on AKAppleSequencer (in AudioKit 4.8, or AKSequencer in earlier versions):

    let track = seq.newTrack()
    track!.setMIDIOutput(sampler.midiIn)

On the new AKSequencer:

    let track = seq.newTrack()  // for the new AKSequencer, in AudioKit 4.8
    track!.setTarget(node: sampler)

Also make sure that you have allowed audio background mode in your project's capabilities, as missing this step will also get you the default sampler.
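To make the wiring concrete, here is a minimal self-contained sketch, assuming AudioKit 4.x with the old AKSequencer API and the bundled .sf2 file from the question; SoundFontSequencer is an illustrative name, not an AudioKit type:

    import AudioKit

    // Minimal sketch (AudioKit 4.x, old AKSequencer API): route the sequencer's
    // track to a SoundFont-loaded AKMIDISampler so playback doesn't fall back
    // to the default (beepy) sampler.
    class SoundFontSequencer {
        let sampler = AKMIDISampler()
        let sequencer = AKSequencer()

        init() throws {
            // assumes "Arachno SoundFont - Version 1.0.sf2" is in the app bundle
            try sampler.loadSoundFont("Arachno SoundFont - Version 1.0", preset: 0, bank: 0)

            // the sampler must be in AudioKit's signal chain to be audible
            AudioKit.output = sampler
            try AudioKit.start()

            // the crucial step: point the track's MIDI output at the sampler
            let track = sequencer.newTrack()
            track?.setMIDIOutput(sampler.midiIn)

            track?.add(noteNumber: 60, velocity: 127,
                       position: AKDuration(beats: 0), duration: AKDuration(beats: 1))
            sequencer.setLength(AKDuration(beats: 4))
            sequencer.setTempo(120)
            sequencer.play()
        }
    }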


You've included a huge amount of code (I haven't tried to absorb all of it), but the fact that you are using both MusicSequence and AKSequencer (which I suspect is the older version, now called AKAppleSequencer, since it is merely a wrapper around MusicSequence) is a red flag.

Thank you. I'll try it out and get back to you. I have it running now; this code works. I just needed to set the track's output to the same output that plays the SoundFont. I'll show the code in a few hours. I created new variables called mixer and filter ("private var mixer: AKMixer?" and "private var filter: AKMoogLadder?"), then added the midisampler to the mixer, added the mixer to the filter, and finally set AudioKit's output to the filter: "mixer = AKMixer(sample.midisampler)", "filter = AKMoogLadder(mixer)", "filter?.cutoffFrequency = 20_000", "AudioKit.output = filter". Lastly, I set the track's MIDI output to the midisampler: "sample.midisequencer.tracks[0].setMIDIOutput(sample.midisampler.midiIn)".
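Collecting the fragments from that comment, the fix reads roughly like this (a sketch under the same AudioKit 4.x assumptions; connectSamplerToSequencer is an illustrative name, and sample is the Sampler1 instance from the question):

    import AudioKit

    // Sketch of the fix described in the comment above (AudioKit 4.x, old AKSequencer API).
    private var mixer: AKMixer?
    private var filter: AKMoogLadder?

    func connectSamplerToSequencer(sample: Sampler1) {
        // feed the SoundFont sampler through a mixer and filter into the output
        mixer = AKMixer(sample.midisampler)
        filter = AKMoogLadder(mixer)
        filter?.cutoffFrequency = 20_000
        AudioKit.output = filter

        // point the sequencer track's MIDI output at the same sampler
        sample.midisequencer.tracks[0].setMIDIOutput(sample.midisampler.midiIn)
    }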