Steps to build a Siri-like speech feature with iOS 10 and Xcode 8. The details are written in Objective-C (OC).


With iOS 10, we developers can also build Siri-like functionality of our own. Let's look at how. Under the hood it uses the same speech-recognition technology Siri does, exposed as the Speech framework. Let's walk through the key code. You will need a UITextView and a UIButton.
Step 1: Define the properties

#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

@interface ViewController () <SFSpeechRecognizerDelegate>
@property (strong, nonatomic) UIButton *siriBtu;
@property (strong, nonatomic) UITextView *siriTextView;
@property (strong, nonatomic) SFSpeechRecognitionTask *recognitionTask;
@property (strong, nonatomic) SFSpeechRecognizer *speechRecognizer;
@property (strong, nonatomic) SFSpeechAudioBufferRecognitionRequest *recognitionRequest;
@property (strong, nonatomic) AVAudioEngine *audioEngine;
@end
Step 2: Request speech-recognition authorization

- (void)viewDidLoad {
    [super viewDidLoad];
    // Use a Chinese (zh-CN) recognizer; pass a different locale identifier for other languages.
    NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:@"zh-CN"];
    self.speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
    self.siriBtu.enabled = NO;
    _speechRecognizer.delegate = self;
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        BOOL isButtonEnabled = NO;
        switch (status) {
            case SFSpeechRecognizerAuthorizationStatusAuthorized:
                isButtonEnabled = YES;
                NSLog(@"Speech recognition authorized");
                break;
            case SFSpeechRecognizerAuthorizationStatusDenied:
                isButtonEnabled = NO;
                NSLog(@"User denied access to speech recognition");
                break;
            case SFSpeechRecognizerAuthorizationStatusRestricted:
                isButtonEnabled = NO;
                NSLog(@"Speech recognition is restricted on this device");
                break;
            case SFSpeechRecognizerAuthorizationStatusNotDetermined:
                isButtonEnabled = NO;
                NSLog(@"Speech recognition not yet authorized");
                break;
        }
        // The authorization callback is not guaranteed to arrive on the main queue,
        // so hop to the main thread before touching UI.
        dispatch_async(dispatch_get_main_queue(), ^{
            self.siriBtu.enabled = isButtonEnabled;
        });
    }];
    self.audioEngine = [[AVAudioEngine alloc] init];
}
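One thing the code above depends on: since iOS 10, requesting speech-recognition or microphone access without the corresponding usage-description keys in Info.plist will terminate the app at runtime. A minimal Info.plist fragment (the description strings are placeholders; write your own):

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition is used to convert your voice to text.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used to record your voice.</string>
```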
Step 3: The button's tap action

- (void)microphoneTap:(UIButton *)sender {
    if ([self.audioEngine isRunning]) {
        // Already recording: stop the engine and finish the request.
        [self.audioEngine stop];
        [self.recognitionRequest endAudio];
        self.siriBtu.enabled = YES;
        [self.siriBtu setTitle:@"Start Recording" forState:UIControlStateNormal];
    } else {
        [self startRecording];
        [self.siriBtu setTitle:@"Stop Recording" forState:UIControlStateNormal];
    }
}
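The tap handler assumes the button is actually wired to it. If you create siriBtu in code rather than in a storyboard, a minimal sketch of the hookup (in viewDidLoad, for example) would be:

```objc
[self.siriBtu addTarget:self
                 action:@selector(microphoneTap:)
       forControlEvents:UIControlEventTouchUpInside];
```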
Step 4: Start recording and convert speech to text

- (void)startRecording {
    // Cancel any in-flight recognition task before starting a new one.
    if (self.recognitionTask) {
        [self.recognitionTask cancel];
        self.recognitionTask = nil;
    }
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    BOOL categoryOK = [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
    BOOL modeOK = [audioSession setMode:AVAudioSessionModeMeasurement error:nil];
    BOOL activeOK = [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];
    // All three calls must succeed for the session to be usable.
    if (categoryOK && modeOK && activeOK) {
        NSLog(@"Audio session configured");
    } else {
        NSLog(@"Failed to configure the audio session");
    }
    self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    AVAudioInputNode *inputNode = self.audioEngine.inputNode;
    // Report partial results so the text view updates while the user is still speaking.
    self.recognitionRequest.shouldReportPartialResults = YES;
    self.recognitionTask = [self.speechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result) {
            self.siriTextView.text = [[result bestTranscription] formattedString];
            isFinal = [result isFinal];
        }
        if (error || isFinal) {
            [self.audioEngine stop];
            [inputNode removeTapOnBus:0];
            self.recognitionRequest = nil;
            self.recognitionTask = nil;
            self.siriBtu.enabled = YES;
        }
    }];
    // Tap the microphone input and feed its buffers into the recognition request.
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [self.recognitionRequest appendAudioPCMBuffer:buffer];
    }];
    [self.audioEngine prepare];
    NSError *startError = nil;
    BOOL started = [self.audioEngine startAndReturnError:&startError];
    NSLog(@"Audio engine started: %d", started);
    self.siriTextView.text = @"Say something! 😀 Siri is listening.";
}
Finally, the delegate method:

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    // Enable the button only while the recognizer is available.
    self.siriBtu.enabled = available;
}
With that, the Siri-like feature is complete.
Summary
That concludes this walkthrough of building a Siri-like speech feature with iOS 10 and Xcode 8. I hope it helps. If you have any questions, leave a message and I'll reply as soon as possible. Thanks as always for supporting our site.