Cover
Copyright Information
About Packt
Why subscribe?
Contributors
About the author
About the reviewers
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the color images
Conventions used
Get in touch
Reviews
Section 1: Essential Mathematics for Deep Learning
Linear Algebra
Comparing scalars and vectors
Linear equations
Solving linear equations in n-dimensions
Solving linear equations using elimination
Matrix operations
Adding matrices
Multiplying matrices
Inverse matrices
Matrix transpose
Permutations
Vector spaces and subspaces
Spaces
Subspaces
Linear maps
Image and kernel
Metric space and normed space
Inner product space
Matrix decompositions
Determinant
Eigenvalues and eigenvectors
Trace
Orthogonal matrices
Diagonalization and symmetric matrices
Singular value decomposition
Cholesky decomposition
Summary
Vector Calculus
Single variable calculus
Derivatives
Sum rule
Power rule
Trigonometric functions
First and second derivatives
Product rule
Quotient rule
Chain rule
Antiderivative
Integrals
The fundamental theorem of calculus
Substitution rule
Areas between curves
Integration by parts
Multivariable calculus
Partial derivatives
Vector calculus
Vector fields
Inverse functions
Probability and Statistics
Understanding the concepts in probability
Classical probability
Sampling with or without replacement
Multinomial coefficient
Stirling's formula
Independence
Discrete distributions
Conditional probability
Random variables
Variance
Multiple random variables
Continuous random variables
Joint distributions
More probability distributions
Normal distribution
Multivariate normal distribution
Bivariate normal distribution
Gamma distribution
Essential concepts in statistics
Estimation
Mean squared error
Sufficiency
Likelihood
Confidence intervals
Bayesian estimation
Hypothesis testing
Simple hypotheses
Composite hypothesis
The multivariate normal theory
Linear models