Graduation Project (Thesis) Foreign Literature Translation

Chinese title of the literature: Remote Video Surveillance System
English title of the literature:
Source of the literature:
Publication date of the literature:
School (Department):
Major: Electronic Information Engineering
Class:
Name:
Student number:
Supervisor:
Date of translation: 2017.02.14

A System for Remote Video Surveillance and Monitoring

The thrust of CMU research under the DARPA Video Surveillance and Monitoring (VSAM) project is cooperative multi-sensor surveillance to support battlefield awareness. Under our VSAM Integrated Feasibility Demonstration (IFD) contract, we have developed automated video understanding technology that enables a single human operator to monitor activities over a complex area using a distributed network of active video sensors. The goal is to automatically collect and disseminate real-time information from the battlefield to improve the situational awareness of commanders and staff. Other military and federal law enforcement applications include providing perimeter security for troops, monitoring peace treaties or refugee movements from unmanned air vehicles, providing security for embassies or airports, and staking out suspected drug or terrorist hide-outs by collecting time-stamped pictures of everyone entering and exiting the building.

Automated video surveillance is an important research area in the commercial sector as well. Technology has reached a stage where mounting cameras to capture video imagery is cheap, but finding available human resources to sit and watch that imagery is expensive. Surveillance cameras are already prevalent in commercial establishments, with camera output being recorded to tapes that are either rewritten periodically or stored in video archives. After a crime occurs - a store is robbed or a car is stolen - investigators can go back after the fact to see what happened, but of course by then it is too late. What is needed is continuous 24-hour monitoring and analysis of video surveillance data to alert security officers to a burglary in progress, or to a suspicious individual loitering in the parking lot, while options are still open for avoiding the crime.

Keeping track of people, vehicles, and their interactions in an urban or battlefield environment is a difficult task. The role of VSAM video understanding technology in achieving this goal is to automatically parse people and vehicles from raw video, determine their geolocations, and insert them into a dynamic scene visualization. We have developed robust routines for detecting and tracking moving objects. Detected objects are classified into semantic categories such as human, human group, car, and truck using shape and color analysis, and these labels are used to improve tracking using temporal consistency constraints. Further classification of human activity, such as walking and running, has also been achieved. Geolocations of labeled entities are determined from their image coordinates using either wide-baseline stereo from two or more overlapping camera views, or intersection of viewing rays with a terrain model from monocular views. These computed locations feed into a higher-level tracking module that tasks multiple sensors with variable pan, tilt, and zoom to cooperatively and continuously track an object through the scene. All resulting object hypotheses from all sensors are transmitted as symbolic data packets back to a central operator control unit, where they are displayed on a graphical user interface to give a broad overview of scene activities. These technologies have been demonstrated through a series of yearly demos, using a testbed system developed on the urban campus of CMU.

Detection of moving objects in video streams is known to be a significant, and difficult, research problem. Aside from the intrinsic usefulness of being able to segment video streams into moving and background components, detecting moving blobs provides a focus of attention for recognition, classification, and activity analysis, making these later processes more efficient since only "moving" pixels need be considered. There are three conventional approaches to moving object detection: temporal differencing, background subtraction, and optical flow. Temporal differencing is very adaptive to dynamic environments, but generally does a poor job of extracting all relevant feature pixels. Background subtraction provides the most complete feature data, but is extremely sensitive to dynamic scene changes due to lighting and extraneous events. Optical flow can be used to detect independently moving objects in the presence of camera motion; however, most optical flow computation methods are computationally complex, and cannot be applied to full-frame video streams in real-time without specialized hardware.

Under the VSAM program, CMU has developed and implemented three methods for moving object detection on the VSAM testbed. The first is a combination of adaptive background subtraction and three-frame differencing. This hybrid algorithm is very fast, and surprisingly effective; indeed, it is the primary algorithm used by the majority of the SPUs in the VSAM system. In addition, two new prototype algorithms have been developed to address shortcomings of this standard approach. First, a mechanism for maintaining temporal object layers is developed to allow greater disambiguation of moving objects that stop for a while, are occluded by other objects, and that then resume motion. One limitation that affects both this method and the standard algorithm is that they only work for static cameras, or in a "step-and-stare" mode for pan-tilt cameras. To overcome this limitation, a second extension has been developed to allow background subtraction from a continuously panning and tilting camera. Through clever accumulation of image evidence, this algorithm can be implemented in real-time on a conventional PC platform. A fourth approach to moving object detection from a moving airborne platform has also been developed, under a subcontract to the Sarnoff Corporation. This approach is based on image stabilization using special video processing hardware.

The current VSAM IFD testbed system and suite of video understanding technologies are the end result of a three-year, evolutionary process. Impetus for this evolution was provided by a series of yearly demonstrations. The following tables provide a succinct synopsis of the progress made during the last three years in the areas of video understanding technology, VSAM testbed architecture, sensor control algorithms, and degree of user interaction. Although the program is over now, the VSAM IFD testbed continues to provide a valuable resource for the development and testing of new video understanding capabilities. Future work will be directed towards achieving the following goals:

1. better understanding of human motion, including segmentation and tracking of articulated body parts;
2. improved data logging and retrieval mechanisms to support 24/7 system operations;
3. bootstrapping functional site models through passive observation of scene activities;
4. better detection and classification of multi-agent events and activities;
5. better camera control to enable smooth object tracking at high zoom; and
6. acquisition and selection of "best views", with the eventual goal of recognizing individuals in the scene.
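As a rough illustration, the hybrid detection scheme described above, three-frame differencing combined with adaptive background subtraction, can be sketched in a few lines of NumPy. The update gain `alpha`, the threshold value, and the simple OR-combination of the two masks are illustrative assumptions; the actual VSAM implementation grows object regions out from the differencing mask and uses its own tuned parameters.

```python
import numpy as np

def detect_and_update(f0, f1, f2, bg, alpha=0.05, thresh=15.0):
    """Hybrid moving-object detection on three consecutive grayscale
    frames (f0 = t-2, f1 = t-1, f2 = t) plus a running background bg."""
    # Three-frame differencing: a pixel counts as moving only if it
    # differs from BOTH previous frames, which suppresses the "ghost"
    # left behind in freshly uncovered background.
    moving = (np.abs(f2 - f1) > thresh) & (np.abs(f2 - f0) > thresh)
    # Background subtraction recovers complete object regions,
    # including interior pixels that frame differencing misses.
    foreground = np.abs(f2 - bg) > thresh
    # Simplified combination (a real system would region-grow the
    # foreground mask from seeds in the differencing mask).
    mask = moving | foreground
    # Adapt the background only at quiescent pixels, blending the new
    # frame in slowly (first-order IIR filter with gain alpha).
    bg = np.where(mask, bg, alpha * f2 + (1 - alpha) * bg)
    return mask, bg
```

In a real pipeline this function would be called once per incoming frame, carrying `bg` forward between calls.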

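The monocular geolocation step described above, intersecting a viewing ray with a terrain model, reduces in the simplest case to a ray-plane intersection. The sketch below assumes a flat ground plane at a known height, rather than the full terrain model used in the testbed, and takes the back-projected ray direction as already computed from the camera calibration.

```python
import numpy as np

def geolocate(cam_pos, ray_dir, ground_z=0.0):
    """Intersect a viewing ray with a flat ground plane z = ground_z.
    cam_pos is the camera centre in world coordinates; ray_dir is the
    back-projected direction of the pixel at the object's footprint
    (need not be unit length)."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    if ray_dir[2] >= 0:
        raise ValueError("viewing ray does not descend toward the ground")
    # Solve cam_pos.z + t * ray_dir.z = ground_z for the ray parameter t.
    t = (ground_z - cam_pos[2]) / ray_dir[2]
    return cam_pos + t * ray_dir
```

With a true terrain model, the same idea applies, but `t` is found by stepping or bisecting along the ray until it crosses the terrain surface.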
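The shape-based classification of detected blobs into categories such as human and vehicle, mentioned above, can be illustrated with a toy dispersedness test (perimeter squared over area): small, irregular human silhouettes score high, compact vehicle blobs score low. The threshold here is an invented placeholder, not a parameter of the system, and the real classifier also uses area, aspect ratio, and color cues.

```python
def classify_blob(area_px, perimeter_px):
    """Toy shape-based blob classifier using dispersedness.
    The 60.0 cutoff is an illustrative assumption only."""
    if area_px <= 0:
        raise ValueError("empty blob")
    dispersedness = perimeter_px ** 2 / area_px
    return "human" if dispersedness > 60.0 else "vehicle"
```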