Computer Networks Report
Title:
Source:
Class:
Student ID:
Name:

Speech recognition

History

One of the most notable domains for the commercial application of speech recognition in the United States has been health care, and in particular the work of the medical transcriptionist. According to industry experts, at its inception speech recognition (SR) was sold as a way to completely eliminate transcription rather than to make the transcription process more efficient, and hence it was not accepted. It was also the case that SR at that time was often technically deficient. Additionally, to be used effectively it required changes to the ways physicians worked and documented clinical encounters, which many if not all were reluctant to make. The biggest limitation to speech recognition automating transcription, however, is seen as the software itself: the nature of narrative dictation is highly interpretive and often requires judgment that can be provided by a real human but not yet by an automated system. Another limitation has been the extensive amount of time required by the user and/or the system provider to train the software.

A distinction in ASR is often made between artificial syntax systems, which are usually domain-specific, and natural language processing, which is usually language-specific. Each of these types of application presents its own particular goals and challenges.

Applications

Health care

In the health care domain, even in the wake of improving speech recognition technologies, medical transcriptionists (MTs) have not yet become obsolete. Many experts in the field anticipate that with increased use of speech recognition technology the services provided may be redistributed rather than replaced.

Speech recognition can be implemented in the front end or the back end of the medical documentation process. Front-end SR is where the provider dictates into a speech-recognition engine, the recognized words are displayed right after they are spoken, and the dictator is responsible for editing and signing off on the document; it never goes through an editor. Back-end SR is where the provider dictates into a digital dictation system, the voice is routed through a speech-recognition machine, and the recognized draft document is routed, along with the original voice file, to an editor, who edits the draft and finalizes the report. Deferred SR is widely used in the industry currently.
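To make the two workflows concrete, the following is a minimal illustrative sketch written for this report rather than taken from any product documentation; the names recognize, Draft, front_end_sr, and back_end_sr are hypothetical placeholders, not a real speech-recognition API.

```python
# Illustrative sketch only: contrasts the front-end and back-end (deferred) SR
# workflows described above. All names here are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Draft:
    text: str            # recognized transcript
    audio_file: str      # original dictation, kept for back-end review
    signed_off: bool = False


def recognize(audio_file: str) -> str:
    # Placeholder for a call into a speech-recognition engine.
    return f"[recognized text of {audio_file}]"


def front_end_sr(audio_file: str, provider_edit) -> Draft:
    # Front-end: the dictating provider sees the recognized words immediately,
    # edits them, and signs off; no transcription editor is involved.
    draft = Draft(recognize(audio_file), audio_file)
    draft.text = provider_edit(draft.text)
    draft.signed_off = True
    return draft


def back_end_sr(audio_file: str, editor_edit) -> Draft:
    # Back-end (deferred): the recognized draft and the original voice file are
    # routed to an editor, who corrects the draft and finalizes the report.
    draft = Draft(recognize(audio_file), audio_file)
    draft.text = editor_edit(draft.text, draft.audio_file)
    draft.signed_off = True
    return draft


if __name__ == "__main__":
    print(front_end_sr("visit_001.wav", lambda text: text.upper()))
    print(back_end_sr("visit_002.wav",
                      lambda text, audio: f"{text} (verified against {audio})"))
```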

Many Electronic Medical Records (EMR) applications can be more effective and may be used more easily when deployed in conjunction with a speech-recognition engine. Searches, queries, and form filling may all be faster to perform by voice than by using a keyboard.

Military

High-performance fighter aircraft

Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition in fighter aircraft. Of particular note are the U.S. program in speech recognition for the Advanced Fighter Technology Integration (AFTI)/F-16 aircraft (F-16 VISTA), the program in France on installing speech recognition systems on Mirage aircraft, and programs in the UK dealing with a variety of aircraft platforms. In these programs, speech recognizers have been operated successfully in fighter aircraft, with applications including setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight displays. Generally, only very limited, constrained vocabularies have been used successfully, and a major effort has been devoted to integration of the speech recognizer with the avionics system.

Some important conclusions from the work were as follows:
1. Speech recognition has definite potential for reducing pilot workload, but this potential was not realized consistently.
2. Achievement of very high recognition accuracy (95% or more) was the most critical factor for making the speech recognition system useful; with lower recognition rates, pilots would not use the system.
3. More natural vocabulary and grammar, and shorter training times, would be useful, but only if very high recognition rates could be maintained.
4. Laboratory research in robust speech recognition for military environments has produced promising results which, if extendable to the cockpit, should improve the utility of speech recognition in high-performance aircraft.

Working with Swedish pilots flying in the JAS-39 Gripen cockpit, Englund (2004) found that recognition deteriorated with increasing G-loads. It was also concluded that adaptation greatly improved the results in all cases, and that introducing models for breathing was shown to improve recognition scores significantly. Contrary to what might be expected, no effects of the speakers' broken English were found. It was evident that spontaneous speech caused problems for the recognizer, as could be expected. A restricted vocabulary, and above all a proper syntax, could thus be expected to improve recognition accuracy substantially.

The Eurofighter Typhoon currently in service with the UK RAF employs a speaker-dependent system, i.e. it requires each pilot to create a template. The system is not used for any safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage, but it is used for a wide range of other cockpit functions. Voice commands are confirmed by visual and/or aural feedback. The system is seen as a major design feature in the reduction of pilot workload, and it even allows the pilot to assign targets to himself with two simple voice commands, or to any of his wingmen with only five commands.

Helicopters

The problems of achieving high recognition accuracy under stress and noise pertain strongly to the helicopter environment as well as to the fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, not only because of the high noise levels but also because the helicopter pilot generally does not wear a facemask, which would reduce acoustic noise in the microphone. Substantial test and evaluation programs have been carried out in the past decade on speech recognition system applications in helicopters, notably by the U.S. Army Avionics Research and Development Activity (AVRADA) and by the Royal Aerospace Establishment (RAE) in the UK. Work in France has included speech recognition in the Puma helicopter. There has also been much useful work in Canada. Results have been encouraging, and voice applications have included control of communication radios, setting of navigation systems, and control of an automated target handover system.

As in fighter applications, the overriding issue for voice in helicopters is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done, both in speech recognition and in overall speech recognition technology, in order to consistently achieve performance improvements in operational settings.

Battle management

Battle management command centres generally require rapid access to, and control of, large, rapidly changing information databases. Commanders and system operators need to query these databases as conveniently as possible, in an eyes-busy environment where much of the information is presented in a display format. Human-machine interaction by voice has the potential to be very useful in these environments. A number of efforts have been undertaken to interface commercially available isolated-word recognizers into battle management environments. In one feasibility study, speech recognition equipment was tested in conjunction with an integrated information display for naval battle management applications. Users were very optimistic about the potential of the system, although capabilities were limited.

Speech understanding programs sponsored by the Defense Advanced Research Projects Agency (DARPA) in the U.S. have focused on this problem of natural speech interface. Speech recognition efforts have focused on a database of continuous, large-vocabulary speech designed to be representative of the naval resource management task. Significant advances in the state of the art of continuous speech recognition (CSR) have been achieved, and current efforts are focused on integrating speech recognition and natural language processing to allow spoken language interaction with a naval resource management system.

Training air traffic controllers

Training for military (or civilian) air traffic controllers (ATC) represents an excellent application for speech recognition systems. Many ATC training systems currently require a person to act as a pseudo-pilot, engaging in a voice dialog with the trainee controller which simulates the dialog the controller would have to conduct with pilots in a real ATC situation. Speech recognition and synthesis techniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel. Air controller tasks are also characterized by highly structured speech as the primary output of the controller, which reduces the difficulty of the speech recognition task.

The U.S. Naval Training Equipment Center has sponsored a number of developments of prototype ATC trainers using speech recognition. Generally, the recognition accuracy falls short of providing graceful interaction between the trainee and the system. However, the prototype training systems have demonstrated a significant potential for voice interaction in these systems and in other training applications. The U.S. Navy has sponsored a large-scale effort in ATC training systems, in which a commercial speech recognition unit was integrated with a complex training system including displays and scenario creation. Although the recognizer was constrained in vocabulary, one of the goals of the training program was to teach controllers to speak in a constrained language, using vocabulary specifically designed for the ATC task. Research in France has focused on the application of speech recognition in ATC training systems, directed at issues both in speech recognition and in the application of task-domain grammar constraints. The USAF, USMC, US Army, and FAA are currently using ATC simulators with speech recognition from a number of different vendors, including UFA, Inc. and Adacel Systems Inc. (ASI). This software uses speech recognition and synthetic speech to enable the trainee to control aircraft and ground vehicles in the simulation without the need for pseudo-pilots. Another approach to ATC simulation with speech recognition has been created by Supremis. The Supremis system is not constrained by the rigid grammars imposed by the underlying limitations of other recognition strategies.

Telephony and other domains

ASR in the field of telephony is now commonplace, and in the field of computer gaming and simulation it is becoming more widespread. Despite the high level of integration with word processing in general personal computing, however, ASR in the field of document production has not seen the expected increases in use.

The improvement of mobile processor speeds has made speech-enabled Symbian and Windows Mobile smartphones feasible. Current speech-to-text programs are too large and require too much CPU power to be practical for the Pocket PC, so speech is used mostly as part of the user interface, for creating pre-defined or custom speech commands. Leading software vendors in this field are Microsoft Corporation (Microsoft Voice Command), Nuance Communications (Nuance Voice Control), Vito Technology (VITO Voice2Go), and Speereo Software (Speereo Voice Translator); MyCaption serves this role for BlackBerry.

People with Disabilities

People with disabilities are another part of the population that benefits from using speech recognition programs. It is especially useful for people who have difficulty with or are unable to use their hands, from mild repetitive stress injuries to involved disabilities that require alternative input for support with accessing the computer. In fact, people who used the keyboard a lot and developed repetitive strain injury (RSI) became an urgent early market for speech recognition. Speech recognition is used in deaf telephony, such as Spinvox voice-to-text voicemail, relay services, and captioned telephone. Individuals with learning disabilities who have problems with thought-to-paper communication (essentially they think of an idea but it is processed incorrectly, causing it to end up differently on paper) can benefit from the software, as it helps to overcome that weakness.

