Intelligent Video Surveillance Systems

Specifications
Hardcover, 340 pages | English
John Wiley & Sons | 2012
ISBN13: 9781848214330
Part of the ISTE series

Summary

Belonging to the wider academic field of computer vision, video analytics has attracted a phenomenal surge of interest since the turn of the millennium. Video analytics addresses the problem that video streams cannot otherwise be exploited in real time for detection or anticipation. It involves analyzing video with algorithms that detect and track objects of interest over time and that signal the presence of events or suspect behavior involving these objects.
The aims of this book are to highlight the operational applications of video analytics, to identify the forces likely to drive its evolution in the years to come, and above all to present the state of the art and the technological hurdles that have yet to be overcome. The need for video surveillance is introduced through two major applications (the security of rail transportation systems and a posteriori investigation). The characteristics of the videos considered are presented through the cameras that capture them and the compression methods that allow them to be transported and stored. Technical topics are then discussed: the analysis of objects of interest (detection, tracking and recognition), and high-level video analysis, which aims to give a semantic interpretation of the observed scene (events, behaviors, types of content). The book concludes with the problem of performance evaluation.
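
As a rough illustration of the kind of processing chain the book covers (moving-object detection by background modeling, the topic of Chapter 7, feeding later tracking and event-analysis stages), here is a minimal Python/OpenCV sketch. It is not taken from the book; the input file name, the MOG2 parameters and the blob-size threshold are illustrative assumptions, and OpenCV 4 is assumed.

    # Minimal background-subtraction detector (illustrative sketch, not from the book).
    import cv2

    cap = cv2.VideoCapture("surveillance.mp4")  # placeholder input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)          # per-pixel foreground mask
        mask = cv2.medianBlur(mask, 5)          # suppress isolated noise pixels
        # OpenCV 4 returns (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:        # keep blobs large enough to be objects of interest
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(30) & 0xFF == 27:        # press Esc to stop
            break

    cap.release()
    cv2.destroyAllWindows()

A complete system of the kind surveyed in the book would follow such a detector with object tracking (Chapter 8), reidentification across a camera network (Chapter 9) and higher-level activity recognition (Chapters 12 and 13).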

Table of contents

Introduction
Jean-Yves DUFOUR and Philippe MOUTTOU

Chapter 1. Image Processing: Overview and Perspectives
Henri MAÎTRE
1.1. Half a century ago
1.2. The use of images
1.3. Strengths and weaknesses of image processing
1.3.1. What are these theoretical problems that image processing has been unable to overcome?
1.3.2. What are the problems that image processing has overcome?
1.4. What is left for the future?
1.5. Bibliography

Chapter 2. Focus on Railway Transport
Sébastien AMBELLOUIS and Jean-Luc BRUYELLE
2.1. Introduction
2.2. Surveillance of railway infrastructures
2.2.1. Needs analysis
2.2.2. Which architectures?
2.2.3. Detection and analysis of complex events
2.2.4. Surveillance of outside infrastructures
2.3. Onboard surveillance
2.3.1. Surveillance of buses
2.3.2. Applications to railway transport
2.4. Conclusion
2.5. Bibliography

Chapter 3. A Posteriori Analysis for Investigative Purposes
Denis MARRAUD, Benjamin CÉPAS, Jean-François SULZER, Christianne MULAT and Florence SÈDES
3.1. Introduction
3.2. Requirements in tools for assisted investigation
3.2.1. Prevention and security
3.2.2. Information gathering
3.2.3. Inquiry
3.3. Collection and storage of data
3.3.1. Requirements in terms of standardization
3.3.2. Attempts at standardization (AFNOR and ISO)
3.4. Exploitation of the data
3.4.1. Content-based indexing
3.4.2. Assisted investigation tools
3.5. Conclusion
3.6. Bibliography

Chapter 4. Video Surveillance Cameras
Cédric LE BARZ and Thierry LAMARQUE
4.1. Introduction
4.2. Constraints
4.2.1. Financial constraints
4.2.2. Environmental constraints
4.3. Nature of the information captured
4.3.1. Spectral bands
4.3.2. 3D or 2D + Z imaging
4.4. Video formats
4.5. Technologies
4.6. Interfaces: from analog to IP
4.6.1. From analog to digital
4.6.2. The advent of IP
4.6.3. Standards
4.7. Smart cameras
4.8. Conclusion
4.9. Bibliography

Chapter 5. Video Compression Formats
Marc LENY and Didier NICHOLSON
5.1. Introduction
5.2. Video formats
5.2.1. Analog video signals
5.2.2. Digital video: standard definition
5.2.3. High definition
5.2.4. The CIF group of formats
5.3. Principles of video compression
5.3.1. Spatial redundancy
5.3.2. Temporal redundancy
5.4. Compression standards
5.4.1. MPEG-2
5.4.2. MPEG-4 Part 2
5.4.3. MPEG-4 Part 10/H.264 AVC
5.4.4. MPEG-4 Part 10/H.264 SVC
5.4.5. Motion JPEG 2000
5.4.6. Summary of the formats used in video surveillance
5.5. Conclusion
5.6. Bibliography

Chapter 6. Compressed Domain Analysis for Fast Activity Detection
Marc LENY
6.1. Introduction
6.2. Processing methods
6.2.1. Use of transformed coefficients in the frequency domain
6.2.2. Use of motion estimation
6.2.3. Hybrid approaches
6.3. Uses of analysis of the compressed domain
6.3.1. General architecture
6.3.2. Functions for which compressed domain analysis is reliable
6.3.3. Limitations
6.4. Conclusion
6.5. Acronyms
6.6. Bibliography

Chapter 7. Detection of Objects of Interest
Yoann DHOME, Bertrand LUVISON, Thierry CHESNAIS, Rachid BELAROUSSI, Laurent LUCAT, Mohamed CHAOUCH and Patrick SAYD
7.1. Introduction
7.2. Moving object detection
7.2.1. Object detection using background modeling
7.2.2. Motion-based detection of objects of interest
7.3. Detection by modeling of the objects of interest
7.3.1. Detection by geometric modeling
7.3.2. Detection by visual modeling
7.4. Conclusion
7.5. Bibliography

Chapter 8. Tracking of Objects of Interest in a Sequence of Images
Simona MAGGIO, Jean-Emmanuel HAUGEARD, Boris MEDEN, Bertrand LUVISON, Romaric AUDIGIER, Brice BURGER and Quoc Cuong PHAM
8.1. Introduction
8.2. Representation of objects of interest and their associated visual features
8.2.1. Geometry
8.2.2. Characteristics of appearance
8.3. Geometric workspaces
8.4. Object-tracking algorithms
8.4.1. Deterministic approaches
8.4.2. Probabilistic approaches
8.5. Updating of the appearance models
8.6. Multi-target tracking
8.6.1. MHT and JPDAF
8.6.2. MCMC and RJMCMC sampling techniques
8.6.3. Interactive filters, track graph
8.7. Object tracking using a PTZ camera
8.7.1. Object tracking using a single PTZ camera only
8.7.2. Object tracking using a PTZ camera coupled with a static camera
8.8. Conclusion
8.9. Bibliography

Chapter 9. Tracking Objects of Interest Through a Camera Network
Catherine ACHARD, Sébastien AMBELLOUIS, Boris MEDEN, Sébastien LEFEBVRE and Dung Nghi TRUONG CONG
9.1. Introduction
9.2. Tracking in a network of cameras whose fields of view overlap
9.2.1. Introduction and applications
9.2.2. Calibration and synchronization of a camera network
9.2.3. Description of the scene by multi-camera aggregation
9.3. Tracking through a network of cameras with non-overlapping fields of view
9.3.1. Issues and applications
9.3.2. Geometric and/or photometric calibration of a camera network
9.3.3. Reidentification of objects of interest in a camera network
9.3.4. Activity recognition/event detection in a camera network
9.4. Conclusion
9.5. Bibliography

Chapter 10. Biometric Techniques Applied to Video Surveillance
Bernadette DORIZZI and Samuel VINSON
10.1. Introduction
10.2. The databases used for evaluation
10.2.1. NIST-Multiple Biometrics Grand Challenge (NIST-MBGC)
10.2.2. Databases of faces
10.3. Facial recognition
10.3.1. Face detection
10.3.2. Face recognition in biometrics
10.3.3. Application to video surveillance
10.4. Iris recognition
10.4.1. Methods developed for biometrics
10.4.2. Application to video surveillance
10.4.3. Systems for iris capture in videos
10.4.4. Summary and perspectives
10.5. Research projects
10.6. Conclusion
10.7. Bibliography

Chapter 11. Vehicle Recognition in Video Surveillance
Stéphane HERBIN
11.1. Introduction
11.2. Specificity of the context
11.2.1. Particular objects
11.2.2. Complex integrated chains
11.3. Vehicle modeling
11.3.1. Wire models
11.3.2. Global textured models
11.3.3. Structured models
11.4. Exploitation of object models
11.4.1. A conventional sequential chain with limited performance
11.4.2. Improving shape extraction
11.4.3. Inferring 3D information
11.4.4. Recognition without form extraction
11.4.5. Toward a finer description of vehicles
11.5. Increasing observability
11.5.1. Moving observer
11.5.2. Multiple observers
11.6. Performances
11.7. Conclusion
11.8. Bibliography

Chapter 12. Activity Recognition
Bernard BOULAY and François BRÉMOND
12.1. Introduction
12.2. State of the art
12.2.1. Levels of abstraction
12.2.2. Modeling and recognition of activities
12.2.3. Overview of the state of the art
12.3. Ontology
12.3.1. Objects of interest
12.3.2. Scenario models
12.3.3. Operators
12.3.4. Summary
12.4. Suggested approach: the ScReK system
12.5. Illustrations
12.5.1. Application at an airport
12.5.2. Modeling the behavior of elderly people
12.6. Conclusion
12.7. Bibliography

Chapter 13. Unsupervised Methods for Activity Analysis and Detection of Abnormal Events
Rémi EMONET and Jean-Marc ODOBEZ
13.1. Introduction
13.2. An example of a topic model: PLSA
13.2.1. Introduction
13.2.2. The PLSA model
13.2.3. PLSA applied to videos
13.3. PLSM and temporal models
13.3.1. PLSM model
13.3.2. Motifs extracted by PLSM
13.4. Applications: counting, anomaly detection
13.4.1. Counting
13.4.2. Anomaly detection
13.4.3. Sensor selection
13.4.4. Prediction and statistics
13.5. Conclusion
13.6. Bibliography

Chapter 14. Data Mining in a Video Database
Luis PATINO, Hamid BENHADDA and François BRÉMOND
14.1. Introduction
14.2. State of the art
14.3. Pre-processing of the data
14.4. Activity analysis and automatic classification
14.4.1. Unsupervised learning of zones of activity
14.4.2. Definition of behaviors
14.4.3. Relational analysis
14.5. Results and evaluations
14.6. Conclusion
14.7. Bibliography

Chapter 15. Analysis of Crowded Scenes in Video
Mikel RODRIGUEZ, Josef SIVIC and Ivan LAPTEV
15.1. Introduction
15.2. Literature review
15.2.1. Crowd motion modeling and segmentation
15.2.2. Estimating density of people in a crowded scene
15.2.3. Crowd event modeling and recognition
15.2.4. Detecting and tracking in a crowded scene
15.3. Data-driven crowd analysis in videos
15.3.1. Off-line analysis of crowd video database
15.3.2. Matching
15.3.3. Transferring learned crowd behaviors
15.3.4. Experiments and results
15.4. Density-aware person detection and tracking in crowds
15.4.1. Crowd model
15.4.2. Tracking detections
15.4.3. Evaluation
15.5. Conclusions and directions for future research
15.6. Acknowledgments
15.7. Bibliography

Chapter 16. Detection of Visual Context
Hervé LE BORGNE and Aymen SHABOU
16.1. Introduction
16.2. State of the art of visual context detection
16.2.1. Overview
16.2.2. Visual description
16.2.3. Multiclass learning
16.3. Fast shared boosting
16.4. Experiments
16.4.1. Detection of boats in the Panama Canal
16.4.2. Detection of the visual context in video surveillance
16.5. Conclusion
16.6. Bibliography

Chapter 17. Example of an Operational Evaluation Platform: PPSL
Stéphane BRAUDEL
17.1. Introduction
17.2. Use of video surveillance: approach and findings
17.3. Current use contexts and new operational concepts
17.4. Requirements in smart video processing
17.5. Conclusion

Chapter 18. Qualification and Evaluation of Performances
Bernard BOULAY, Jean-François GOUDOU and François BRÉMOND
18.1. Introduction
18.2. State of the art
18.2.1. Applications
18.2.2. Process
18.3. An evaluation program: ETISEO
18.3.1. Methodology
18.3.2. Metrics
18.3.3. Summary
18.4. Toward a more generic evaluation
18.4.1. Contrast
18.4.2. Shadows
18.5. The Quasper project
18.6. Conclusion
18.7. Bibliography

List of Authors

Index
