C# UWP: AudioGraph memory keeps increasing
I made two audio graphs, one for computer speech and one for the microphone. In both cases, once the graph is started the memory keeps increasing: with the computer-speech graph it keeps growing even after the sentence has finished playing, and with the microphone graph it grows for as long as the graph is running. This can be seen in the Visual Studio diagnostic tools.

MainPage.xaml:
<Grid>
    <Button x:Name="BtnMicrophone" Content="Start Graph Microphone" Click="BtnMicrophone_Click" Margin="10,47,0,0" VerticalAlignment="Top"/>
    <Button x:Name="BtnComputerVoice" Content="Start Graph Computer" Click="BtnComputerVoice_Click" Margin="10,10,0,0" VerticalAlignment="Top" Width="169"/>
</Grid>
How can I fix the memory that keeps increasing after the graph is started?
Thanks in advance.

The continuous memory growth comes from the AudioFrameOutputNode. That node exists so that developers can write custom code to receive and process the audio data output by the audio graph.

Judging from the code you have provided, you are not doing any audio-data processing, so you do not need to add this output node.
You can try commenting out the following lines, and the memory will stop growing without bound:
//frameOutputNode = graph.CreateFrameOutputNode();
//mediaInput.AddOutgoingConnection(frameOutputNode);
//frameOutputNode.OutgoingGain = 4;
Comment: I do need to process the audio data, for an FFT.

Reply: Hello, if you need the AudioFrameOutputNode, then release the node, or the AudioFrame it produces, promptly after the relevant handler has finished.

Comment: Could you give a small example?

Reply: Hello, in the code you have currently provided I cannot find any code that processes the AudioFrame. If the FFT you mention involves a third-party library, please check that library's documentation or consult its developers.
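To make the "release the AudioFrame promptly" advice concrete, here is a minimal sketch (my own illustration, not code from the question) that pulls each frame out of `frameOutputNode` inside the graph's `QuantumStarted` handler and disposes it as soon as processing is done. It assumes `graph` and `frameOutputNode` are the fields from the question's code-behind:

```csharp
// using Windows.Media;            // AudioFrame, AudioBuffer, AudioBufferAccessMode
// using Windows.Foundation;       // IMemoryBufferReference

graph.QuantumStarted += (AudioGraph sender, object args) =>
{
    // GetFrame() hands us an AudioFrame that we own; disposing it
    // returns its native buffer instead of letting frames pile up.
    using (AudioFrame frame = frameOutputNode.GetFrame())
    using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
    using (IMemoryBufferReference reference = buffer.CreateReference())
    {
        // ... run the FFT over `reference` here (omitted; this
        // depends on whichever FFT library you are using) ...
    } // frame, buffer and reference are all released here, every quantum
};
```

Because AudioFrame wraps a native buffer, disposing it explicitly (here via `using`) frees that memory immediately rather than waiting for the garbage collector, which is what lets the memory usage level off while the graph runs.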
MainPage.xaml.cs:

MediaSourceAudioInputNode mediaInput;
AudioFrameOutputNode frameOutputNode;
AudioDeviceOutputNode deviceOutput;
MediaSource mediaVoice;
AudioGraph graph;

public MainPage()
{
    this.InitializeComponent();
}

protected override async void OnNavigatedTo(NavigationEventArgs e)
{
    await InitializeAudioGraph();
}

private async void BtnComputerVoice_Click(object sender, RoutedEventArgs e)
{
    graph.Stop();
    graph.ResetAllNodes();

    // Start AudioGraph Computer
    var synth = new Windows.Media.SpeechSynthesis.SpeechSynthesizer();
    SpeechSynthesisStream stream = await synth.SynthesizeTextToStreamAsync("Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.");
    synth.Dispose();

    mediaVoice = MediaSource.CreateFromStream(stream, stream.ContentType);
    CreateMediaSourceAudioInputNodeResult fileInputResult = await graph.CreateMediaSourceAudioInputNodeAsync(mediaVoice);
    if (MediaSourceAudioInputNodeCreationStatus.Success != fileInputResult.Status) { return; }

    mediaInput = fileInputResult.Node;
    mediaInput.AddOutgoingConnection(deviceOutput);

    frameOutputNode = graph.CreateFrameOutputNode();
    mediaInput.AddOutgoingConnection(frameOutputNode);
    frameOutputNode.OutgoingGain = 4;

    graph.Start();
}

private async void BtnMicrophone_Click(object sender, RoutedEventArgs e)
{
    if (graph != null)
    {
        graph.Stop();
        graph.ResetAllNodes();
    }
    if (mediaInput != null)
    {
        mediaInput.Stop();
        mediaInput.Dispose();
        mediaInput = null;
    }

    // Start AudioGraph Microphone
    CreateAudioDeviceInputNodeResult fileInputResult = await graph.CreateDeviceInputNodeAsync(Windows.Media.Capture.MediaCategory.Speech);
    if (AudioDeviceNodeCreationStatus.Success != fileInputResult.Status)
    {
        return;
    }

    AudioDeviceInputNode deviceInput = fileInputResult.DeviceInputNode;
    frameOutputNode = graph.CreateFrameOutputNode();
    deviceInput.AddOutgoingConnection(frameOutputNode);

    graph.Start();
}

public async Task InitializeAudioGraph()
{
    // Create an AudioGraph with default settings
    AudioGraphSettings settings = new AudioGraphSettings(AudioRenderCategory.Media);
    CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
    if (result.Status != AudioGraphCreationStatus.Success)
    {
        return;
    }
    graph = result.Graph;

    // Create a device output node
    CreateAudioDeviceOutputNodeResult deviceOutputNodeResult = await graph.CreateDeviceOutputNodeAsync();
    if (deviceOutputNodeResult.Status != AudioDeviceNodeCreationStatus.Success)
    {
        return;
    }
    deviceOutput = deviceOutputNodeResult.DeviceOutputNode;
}
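As a further guard against leaks, it may also help to tear the whole graph down when the page is navigated away from. The following is a possible cleanup sketch (an assumption on my part, not part of the original question); AudioGraph, its nodes, and MediaSource all implement IDisposable, so disposing them releases their native resources deterministically:

```csharp
protected override void OnNavigatedFrom(NavigationEventArgs e)
{
    // Stop the graph first so no quantum is in flight, then dispose
    // the nodes and the graph itself to free their native resources.
    graph?.Stop();
    frameOutputNode?.Dispose();
    mediaInput?.Dispose();
    deviceOutput?.Dispose();
    mediaVoice?.Dispose();
    graph?.Dispose();
    graph = null;

    base.OnNavigatedFrom(e);
}
```

Disposing the graph disposes is not strictly required for the leak discussed above, but it ensures the audio device is released as soon as the user leaves the page instead of whenever the GC runs.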