Video processing: International Workshop on Performance Evaluation of Tracking and Surveillance (PETS) 200

Document summary

IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS'2001)

Data abstract: PETS'2001 consists of five separate sets of training and test sequences, i.e. each set consists of one training sequence and one test sequence. All the datasets are multi-view (2 cameras) and are significantly more challenging than PETS'2000 in terms of significant lighting variation, occlusion, scene activity and use of multi-view data.

Keywords (Chinese): PETS2001, 跟蹤, 監(jiān)控, 多視角, 光線變化, 遮擋
Keywords (English): PETS2001, tracking, surveillance, multi-view, lighting variation, occlusion
Data format: VIDEO
Data usage: outdoor people and vehicle tracking (two synchronised views; includes omnidirectional and moving camera); annotation available.

Detailed description: PETS'2001 Datasets

PETS'2001 consists of five separate sets of training and test sequences, i.e. each set consists of one training sequence and one test sequence. All the datasets are multi-view (2 cameras) and are significantly more challenging than PETS'2000 in terms of significant lighting variation, occlusion, scene activity and use of multi-view data. The annotation (ground truth) for the datasets is available .

Dataset 1 (training = 3064 frames, testing = 2688 frames): moving people and vehicles. The camera calibration may be found .
Dataset 2 (training = 2989 frames, testing = 2823 frames): moving people and vehicles. The camera calibration may be found .
Dataset 3 (training = 5563 frames, testing = 5336 frames): moving people. This is a more challenging sequence in terms of multiple targets and significant lighting variation. The camera calibration may be found .
Dataset 4 (training = 6789 frames, testing = 5010 frames): moving people and vehicles (catadioptric vision: one narrow field of view, one panoramic). The camera calibration may be found .
Dataset 5 (training = 2866 frames, testing = 2867 frames): moving vehicle (forward and rear views). The camera calibration may be found .

For each dataset, the training directory contains a training set of frames both in QuickTime movie format with Motion JPEG-A compression and as individual JPEG images; the directory contains frame-synchronised images for the two cameras. The test directory likewise contains a test set of frames both in QuickTime movie format with Motion JPEG-A compression and as individual JPEG images; the directory contains frame-synchronised images for the two cameras. The annotation (ground truth) for the datasets is available .

VERY IMPORTANT INFORMATION

The tracking results that you report in your paper submission should be produced using the test sequences, but the training sequences may optionally be used if the algorithms require them for learning etc. Tracking may be based on a single camera view of the scene (CAMERA1) or on the dual-view data; the approach used must be clearly stated in the paper. The tracking can be performed on the entire test sequence or a portion of it. The images may be converted to any other format as appropriate, e.g. subsampled to half-PAL or converted to monochrome. All results reported in the paper should clearly indicate which part of the test sequence is used, ideally with reference to frame numbers where appropriate.

The tracking results must be submitted along with the paper, with the tracking results generated in . This will be straightforward and should not add a significant overhead to your effort. The results you provide will be used to perform automatic performance evaluation.

The paper that you submit may be based on previously published tracking methods/algorithms (including papers submitted to the main CVPR conference). What matters is that your paper MUST report tracking results using the datasets supplied.

The recommendation is that you attempt to track the objects (people, vehicles, or both) in Dataset 1 or 2 first and report tracking results on the test sequences for these datasets in the paper. (Note that you need only use single-view data, but you may use both views of the same scene, frame synchronised, if your algorithms/methods of tracking allow it.) The prerequisite for submitting your paper to the workshop is that you minimally report tracking results on either or both of Datasets 1 and 2. Once you have successfully completed the above, attempt Dataset 3 and possibly Dataset 4 (if you are interested in catadioptric vision), which are significantly longer and include significant lighting variation.
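Since each directory holds frame-synchronised JPEG images for the two cameras, dual-view processing starts by pairing files from the two views by frame number. The sketch below assumes filenames carrying a numeric frame index (e.g. `frame_0001.jpg`); the actual PETS'2001 naming scheme may differ, so the pattern would need adapting.

```python
import re
from typing import Dict, List, Tuple

def pair_synchronised_frames(cam1_files: List[str],
                             cam2_files: List[str]) -> List[Tuple[str, str]]:
    """Pair CAMERA1/CAMERA2 frames by the frame number embedded in the filename.

    Only frames present in both views are kept, returned in frame order.
    """
    def index(files: List[str]) -> Dict[int, str]:
        out = {}
        for name in files:
            m = re.search(r"(\d+)", name)  # first digit run = frame number (assumption)
            if m:
                out[int(m.group(1))] = name
        return out

    by_num1, by_num2 = index(cam1_files), index(cam2_files)
    common = sorted(by_num1.keys() & by_num2.keys())
    return [(by_num1[n], by_num2[n]) for n in common]

# Example: camera 2 is missing frame 1 and has an extra frame 4.
pairs = pair_synchronised_frames(
    ["frame_0001.jpg", "frame_0002.jpg", "frame_0003.jpg"],
    ["frame_0002.jpg", "frame_0003.jpg", "frame_0004.jpg"],
)
# pairs covers only frames 2 and 3, which both views share
```

Pairing by embedded frame number rather than list position keeps the two views aligned even if one camera's directory has gaps.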
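The rules above explicitly allow converting the images, e.g. subsampling to half-PAL or converting to monochrome. A minimal sketch of that preprocessing, assuming full-PAL frames of 576x768 RGB pixels and a simple unweighted channel average for greyscale (other weightings are equally valid):

```python
import numpy as np

def to_half_pal_mono(frame: np.ndarray) -> np.ndarray:
    """Convert an RGB frame to greyscale and subsample 2x in each axis.

    A full-PAL frame (576x768) becomes half-PAL (288x384).
    """
    mono = frame.mean(axis=2)   # unweighted channel average; a simple choice of greyscale
    return mono[::2, ::2]       # keep every second row and column

full = np.zeros((576, 768, 3), dtype=np.uint8)  # synthetic full-PAL RGB frame
half = to_half_pal_mono(full)
```

Cutting resolution and colour this way quarters the pixel count per frame, which matters for the longer sequences such as Datasets 3 and 4.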
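The workshop does not prescribe any tracking method. Purely as an illustration of the kind of change detection many outdoor trackers start from (and which the dataset's lighting variation is designed to stress), here is a hypothetical frame-differencing step, not part of the PETS specification:

```python
import numpy as np

def motion_mask(prev: np.ndarray, curr: np.ndarray, thresh: int = 25) -> np.ndarray:
    """Return a boolean mask of pixels whose grey-level change exceeds `thresh`."""
    # Widen to int16 first so the subtraction of uint8 frames cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200          # a bright "object" appears in the second frame
mask = motion_mask(prev, curr)
# mask is True exactly on the 2x2 region that changed
```

A fixed global threshold like this is exactly what significant lighting variation breaks, which is why Dataset 3 is flagged as the more challenging sequence.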
