Neural Network Toolbox
Index
- ADALINE network
- decision boundary <1> <2>
- Adaption
- custom function
- function
- parameters
- Adaptive filter
- example
- noise cancellation example
- prediction application
- prediction example
- training
- Adaptive linear networks <1> <2>
- Amplitude detection
- Applications
- adaptive filtering
- aerospace
- automotive
- banking
- defense
- electronics
- entertainment
- financial
- insurance
- manufacturing
- medical <1> <2>
- oil and gas exploration
- robotics
- speech
- telecommunications
- transportation
- Architecture
- bias connection <1> <2>
- input connection <1> <2>
- input delays
- layer connection <1> <2>
- layer delays
- number of inputs <1> <2>
- number of layers <1> <2>
- number of outputs <1> <2>
- number of targets <1> <2>
- output connection <1> <2>
- target connection <1> <2>
- Backpropagation <1> <2>
- algorithm
- example
- Backtracking search
- Batch training <1> <2> <3>
- dynamic networks
- static networks
- Bayesian framework
- Benchmark <1> <2>
- BFGS quasi-Newton algorithm
- Bias
- connection
- definition
- initialization function
- learning
- learning function
- learning parameters
- subobject <1> <2>
- value <1> <2>
- Box distance
- Brent's search
- Cell array
- derivatives
- errors
- initial layer delay states
- input P
- input vectors
- inputs and targets
- inputs property
- layer targets
- layers property
- matrix of concurrent vectors
- matrix of sequential vectors
- sequence of outputs
- sequential inputs
- tap delayed inputs
- weight matrices and bias vectors
- Charalambous' search
- Classification
- input vectors
- linear
- regions
- Code
- mathematical equivalents
- perceptron network
- writing
- Competitive layer
- Competitive neural network
- example
- Competitive transfer function <1> <2> <3>
- Concurrent inputs <1> <2>
- Conjugate gradient algorithm
- Fletcher-Reeves update
- Polak-Ribiere update
- Powell-Beale restarts
- scaled
- Continuous stirred tank reactor
- Control
- control design
- electromagnet <1> <2>
- feedback linearization <1> <2>
- model predictive control <1> <2> <3> <4> <5> <6>
- model reference control <1> <2> <3> <4> <5> <6>
- NARMA-L2 <1> <2> <3> <4> <5> <6>
- plant <1> <2> <3>
- robot arm
- time horizon
- training data
- CSTR
- Custom
- neural network
- Dead neurons
- Decision boundary <1> <2>
- definition
- Demonstrations
- appelm1
- applin3
- definition
- demohop1
- demohop2
- demorb4
- nnd10lc
- nnd11gn
- nnd12cg
- nnd12m
- nnd12mo
- nnd12sd1 <1> <2>
- nnd12vl
- Distance <1> <2>
- box
- custom function
- Euclidean
- link
- Manhattan
- tuning phase
- Dynamic networks <1> <2>
- training <1> <2>
- Early stopping
- Electromagnet <1> <2>
- Elman network
- recurrent connection
- Euclidean distance
- Export
- networks <1> <2>
- training data
- Feedback linearization <1> <2>
- Feedforward network
- Finite impulse response filter <1> <2>
- Fletcher-Reeves update
- Generalization
- regularization
- Generalized regression network
- Golden section search
- Gradient descent algorithm <1> <2>
- batch
- with momentum <1> <2>
- Graphical User Interface
- Gridtop topology
- Hagan, Martin <1> <2>
- Hard limit transfer function <1> <2> <3>
- Heuristic techniques
- Hidden layer
- definition
- Home neuron
- Hopfield network
- architecture
- design equilibrium point
- solution trajectories
- stable equilibrium point
- target equilibrium points
- Horizon
- Hybrid bisection-cubic search
- Import
- networks <1> <2>
- training data <1> <2>
- Incremental training
- Initial step size function
- Initialization
- additional functions
- custom function
- definition
- function
- parameters <1> <2>
- Input
- concurrent
- connection
- number
- range
- size
- subobject <1> <2> <3>
- Input vector
- classification
- dimension reduction
- distance
- outlier
- topology
- Input weights
- definition
- Inputs
- concurrent
- sequential <1> <2>
- Installation
- guide
- Jacobian matrix
- Kohonen learning rule
- Lambda parameter
- Layer
- connection
- dimensions
- distance function
- distances
- initialization function
- net input function
- number
- positions
- size
- subobject
- topology function
- transfer function
- Layer weights
- definition
- Learning rate
- adaptive
- maximum stable
- optimal
- ordering phase
- too large
- tuning phase
- Learning rules
- custom
- Hebb
- Hebb with decay
- instar
- Kohonen
- outstar
- supervised learning
- unsupervised learning
- Widrow-Hoff <1> <2> <3> <4> <5> <6>
- Learning vector quantization
- creation
- learning rule <1> <2>
- LVQ network
- subclasses
- target classes
- union of two subclasses
- Least mean square error <1> <2>
- Levenberg-Marquardt algorithm
- reduced memory
- Line search function
- backtracking search
- Brent's search
- Charalambous' search
- golden section search
- hybrid bisection-cubic search
- Linear networks
- design
- Linear transfer function <1> <2> <3> <4> <5>
- Linearly dependent vectors
- Link distance
- Log-sigmoid transfer function <1> <2> <3>
- MADALINE
- Magnet <1> <2>
- Manhattan distance
- Maximum performance increase
- Maximum step size
- Mean square error function
- least <1> <2>
- Memory reduction
- Model predictive control <1> <2> <3> <4> <5> <6>
- Model reference control <1> <2> <3> <4> <5> <6>
- Momentum constant
- Mu parameter
- NARMA-L2 <1> <2> <3> <4> <5> <6>
- Neighborhood
- Net input function
- custom
- Network
- definition
- dynamic <1> <2>
- static
- Network function
- Network layer
- competitive
- definition
- Network/Data Manager window
- Neural network
- adaptive linear <1> <2>
- competitive
- custom
- definition
- feedforward
- generalized regression
- multiple layer <1> <2> <3>
- one layer <1> <2> <3> <4> <5>
- probabilistic
- radial basis
- self-organizing
- self-organizing feature map
- Neural Network Design
- Instructor's Manual
- Overheads
- Neuron
- dead (not allocated)
- definition
- home
- Newton's method
- NN predictive control <1> <2> <3> <4> <5> <6>
- Normalization
- inputs and targets
- mean and standard deviation
- Notation
- abbreviated <1> <2>
- layer
- transfer function symbols <1> <2>
- Numerical optimization
- One-step secant algorithm
- Ordering phase learning rate
- Outlier input vector
- Output
- connection
- number
- size
- subobject <1> <2>
- Output layer
- definition
- linear
- Overdetermined systems
- Overfitting
- Pass
- definition
- Pattern recognition
- Perceptron learning rule <1> <2>
- normalized
- Perceptron network
- code
- creation
- limitations
- Performance function
- custom
- modified
- parameters
- Plant <1> <2> <3>
- Plant identification <1> <2> <3> <4>
- Polak-Ribiere update
- Postprocessing
- Post-training analysis
- Powell-Beale restarts
- Predictive control <1> <2> <3> <4> <5> <6>
- Preprocessing
- Principal component analysis
- Probabilistic neural network
- design
- Quasi-Newton algorithm
- BFGS
- Radial basis
- design
- efficient network
- function
- network
- network design
- Radial basis transfer function
- Recurrent connection
- Recurrent networks
- Regularization
- automated
- Resilient backpropagation
- Robot arm
- Self-organizing feature map (SOFM) network
- neighborhood
- one-dimensional example
- two-dimensional example
- Self-organizing networks
- Sequential inputs <1> <2>
- S-function
- Sigma parameter
- Simulation
- definition
- SIMULINK
- generating networks
- NNT block set <1> <2>
- Spread constant
- Squashing functions
- Static networks
- batch training
- training
- Subobject
- bias <1> <2> <3>
- input <1> <2> <3> <4>
- layer <1> <2>
- output <1> <2> <3>
- target <1> <2> <3>
- weight <1> <2> <3> <4> <5>
- Supervised learning
- target output
- training set
- System identification <1> <2> <3> <4> <5> <6>
- Tan-sigmoid transfer function
- Tapped delay line <1> <2>
- Target
- connection
- number
- size
- subobject <1> <2>
- Target output
- Time horizon
- Topologies
- gridtop
- Topologies for SOFM neuron locations
- Topology
- custom function
- Training
- batch <1> <2>
- competitive networks
- custom function
- definition <1> <2>
- efficient
- faster
- function
- incremental
- ordering phase
- parameters
- post-training analysis
- self-organizing feature map
- styles
- tuning phase
- Training data
- Training set
- Training styles
- Training with noise
- Transfer function
- competitive <1> <2> <3>
- custom
- definition
- derivatives
- hard limit <1> <2>
- linear <1> <2> <3>
- log-sigmoid <1> <2> <3>
- radial basis
- saturating linear
- soft maximum
- tan-sigmoid
- triangular basis
- Transformation matrix
- Tuning phase learning rate
- Tuning phase neighborhood distance
- Underdetermined systems
- Unsupervised learning
- Variable learning rate algorithm
- Vectors
- linearly dependent
- Weight
- definition
- delays <1> <2>
- initialization function <1> <2>
- learning <1> <2>
- learning function <1> <2>
- learning parameters <1> <2>
- size <1> <2>
- subobject <1> <2> <3>
- value <1> <2> <3>
- weight function <1> <2>
- Weight function
- custom
- Weight matrix
- definition
- Widrow-Hoff learning rule <1> <2> <3> <4> <5> <6>
- Workspace (command line)