Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems
Available
Contents

Preface xiii
Using this book as a course text xvii
Acknowledgments xix

1  Of neurons and engineers 1
   Explaining neural systems 3
   Neural representation 5
      The single neuron 9
      Beyond the single neuron 11
   Neural transformation 13
   Three principles of neural engineering 15
      Principle 1 16
      Principle 2 17
      Principle 3 18
      Addendum 18
   Methodology 19
      System description 19
      Design specification 21
      Implementation 21
      Discussion 22
   A possible theory of neurobiological systems 23

I  REPRESENTATION

2  Representation in populations of neurons 29
   Representing scalar magnitudes 30
      Engineered representation 30
      Biological representation 33
   Noise and precision 40
      Noisy neurons 40
      Biological representation and noise 42
   An example: Horizontal eye position 44
      System description 44
      Design specification 46
      Implementation 47
      Discussion 48
   Representing vectors 49
   An example: Arm movements 52
      System description 53
      Design specification 54
      Implementation 55
      Discussion 55
   An example: Semicircular canals 57
      System description 57
      Implementation 58
   Summary 59

3  Extending population representation 61
   A representational hierarchy 61
   Function representation 63
   Function spaces and vector spaces 69
   An example: Working memory 72
      System description 73
      Design specification 74
      Implementation 77
      Discussion 78
   Summary 79

4  Temporal representation in spiking neurons 81
   The leaky integrate-and-fire (LIF) neuron 81
      Introduction 81
      Characterizing the LIF neuron 83
      Strengths and weaknesses of the LIF neuron model 88
   Temporal codes in neurons 89
   Decoding neural spikes 92
      Introduction 92
      Neuron pairs 94
      Representing time dependent signals with spikes 96
      Discussion 103
   Information transmission in LIF neurons 105
      Finding optimal decoders in LIF neurons 105
      Information transmission 109
      Discussion 114
   More complex single neuron models 115
      Adapting LIF neuron 116
      θ-neuron 118
      Adapting, conductance-based neuron 123
      Discussion 126
   Summary 127

5  Population-temporal representation 129
   Putting time and populations together again 129
   Noise and precision: Dealing with distortions 132
   An example: Eye position revisited 136
      Implementation 136
      Discussion 137
   Summary 139

II  TRANSFORMATION

6  Feed-forward transformations 143
   Linear transformations of scalars 143
      A communication channel 143
      Adding two variables 148
   Linear transformations of vectors 151
   Nonlinear transformations 153
      Multiplying two variables 154
   Negative weights and neural inhibition 160
      Analysis 161
      Discussion 166
   An example: The vestibular system 168
      System description 169
      Design specification 174
      Implementation 175
      Discussion 180
   Summary 182

7  Analyzing representation and transformation 185
   Basis vectors and basis functions 185
   Decomposing Γ 192
   Determining possible transformations 196
      Linear tuning curves 200
      Gaussian tuning curves 204
   Quantifying representation 206
      Representational capacity 206
      Useful representation 208
   The importance of diversity 210
   Summary 216

8  Dynamic transformations 219
   Control theory and neural models 221
      Introduction to control theory 221
      A control theoretic description of neural populations 222
      Revisiting levels of analysis 225
   Three principles of neural engineering quantified 230
   An example: Controlling eye position 232
      Implementation 233
      Discussion 240
   An example: Working memory 244
      Introduction 244
      Implementation 244
      Dynamics of the vector representation 244
      Simulation results 245
      Discussion 248
   Attractor networks 250
      Introduction 250
      Generalizing representation 254
      Generalizing dynamics 256
      Discussion 258
   An example: Lamprey locomotion 260
      Introduction 260
      System description 261
      Design specification 264
      Implementation 265
      Discussion 271
   Summary 273

9  Statistical inference and learning 275
   Statistical inference and neurobiological systems 275
   An example: Interpreting ambiguous input 281
   An example: Parameter estimation 283
   An example: Kalman filtering 287
      Two versions of the Kalman filter 288
      Discussion 291
   Learning 293
      Learning a communication channel 294
      Learning from learning 298
   Summary 300

Appendix A: Chapter 2 derivations 301
   A.1 Determining optimal decoding weights 301
Appendix B: Chapter 4 derivations 303
   B.1 Opponency and linearity 303
   B.2 Leaky integrate-and-fire model derivations 303
   B.3 Optimal filter analysis with a sliding window 305
   B.4 Information transmission of linear estimators for nonlinear systems 309
Appendix C: Chapter 5 derivations 313
   C.1 Residual fluctuations due to spike trains 313
Appendix D: Chapter 6 derivations 317
   D.1 Coincidence detection 317
Appendix E: Chapter 7 derivations 319
   E.1 Practical considerations for finding linear decoders for x and f(x) 319
   E.2 Finding the useful representational space 323
Appendix F: Chapter 8 derivations 327
   F.1 Synaptic dynamics dominate neural dynamics 327
   F.2 Derivations for the lamprey model 327
      F.2.1 Determining muscle tension 327
      F.2.2 Error 329
      F.2.3 Oscillator dynamics 331
      F.2.4 Coordinate changes with matrices 332
      F.2.5 Projection matrices 333
References 335
Index 351
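The single-neuron model at the heart of the book's temporal representation chapters is the leaky integrate-and-fire (LIF) neuron. As a flavor of what the book covers, here is a minimal simulation sketch of that model under a constant driving current; the parameter values (membrane time constant, refractory period, threshold) are illustrative assumptions, not values taken from the book.

```python
def lif_spike_times(current, t_max=1.0, dt=1e-4,
                    tau_rc=0.02, tau_ref=0.002, v_th=1.0):
    """Euler simulation of a leaky integrate-and-fire neuron driven by a
    constant input current; returns the list of spike times.

    tau_rc (membrane time constant), tau_ref (refractory period), and
    v_th (normalized firing threshold) are illustrative defaults."""
    v = 0.0            # normalized membrane voltage
    refractory = 0.0   # time remaining in the refractory period
    spikes = []
    t = 0.0
    while t < t_max:
        if refractory > 0.0:
            refractory -= dt        # voltage held at reset while refractory
        else:
            # Leaky integration: dv/dt = (current - v) / tau_rc
            v += dt * (current - v) / tau_rc
            if v >= v_th:
                spikes.append(t)    # threshold crossing emits a spike,
                v = 0.0             # then the voltage resets
                refractory = tau_ref
        t += dt
    return spikes

# A suprathreshold constant current yields a regular spike train whose
# count over 1 s approximates the steady-state firing rate; a
# subthreshold current never fires.
suprathreshold = lif_spike_times(current=1.5)
subthreshold = lif_spike_times(current=0.5)
```

Note the characteristic LIF behavior this exposes: the voltage decays toward the input value, so any current below `v_th` saturates without ever spiking, while stronger currents shorten the interspike interval.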
Paperback (sewn binding) | 356 pages | English
1st edition | Published in 2004
Category: