Linear Algebra for Data Science — Matrices

Basic matrix concepts used in data science and machine learning

Benjamin Obi Tayo Ph.D.
4 min read · Mar 17



About the Author

Benjamin O. Tayo is a data science educator, tutor, coach, mentor, and consultant. Contact me for more information about our services and pricing: benjaminobi@gmail.com

Dr. Tayo has written close to 300 articles and tutorials in data science for educating the general public. Support Dr. Tayo’s educational mission using the links below:

PayPal: https://www.paypal.me/BenjaminTayo

CashApp: https://cash.app/$BenjaminTayo

INTRODUCTION

After reading this article, you should understand the following:

  • The definition of a matrix
  • Matrix multiplication
  • Applying matrix multiplication to linear regression
  • The inverse of a matrix
  • Eigenvalues
  • Eigenvectors

To learn more about the use of vectors in data science and machine learning, see the article below:

Matrices

A matrix is a two-dimensional array of elements arranged in rows and columns. A matrix is described by its number of rows and columns. For example, an n x m matrix A has n rows and m columns, with entries A_ij, where i = 1, …, n indexes the rows and j = 1, …, m indexes the columns.
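As a quick illustration (a sketch using NumPy, which the original article does not show), a matrix can be created as a two-dimensional array and its rows and columns read off from its shape:

```python
import numpy as np

# A 2 x 3 matrix: 2 rows and 3 columns
A = np.array([[1, 2, 3],
              [4, 5, 6]])

print(A.shape)      # (2, 3) -> (rows, columns)
print(A[0, 2])      # entry A_13 in 1-based math notation: 3
```

Note that NumPy indexes entries from 0, while the mathematical notation A_ij above starts at 1.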

Matrix Multiplication

Matrix multiplication plays an important role in data science and machine learning. For the product of two matrices to be well defined, the matrices must be compatible: the number of columns of matrix A must equal the number of rows of matrix B. Matrix multiplication is not commutative; in general, AB is not equal to BA.
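The non-commutativity is easy to check numerically. A minimal sketch (the matrices below are illustrative, not from the original article):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

AB = A @ B  # [[2, 1], [4, 3]]
BA = B @ A  # [[3, 4], [1, 2]]

# AB and BA differ: matrix multiplication is not commutative
print(np.array_equal(AB, BA))  # False
```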

Let A be an n x p matrix and B a p x m matrix.

Then the product matrix C = AB is an n x m matrix with elements given by

C_ij = A_i1 B_1j + A_i2 B_2j + … + A_ip B_pj,

that is, C_ij is the sum over k = 1, …, p of A_ik B_kj.
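This element-wise formula can be verified against NumPy's built-in matrix product. A sketch (the dimensions and random matrices are illustrative assumptions, not from the original article):

```python
import numpy as np

# A is n x p, B is p x m, so C = AB is n x m
n, p, m = 2, 3, 4
rng = np.random.default_rng(0)
A = rng.integers(0, 10, size=(n, p))
B = rng.integers(0, 10, size=(p, m))

C = A @ B  # built-in matrix product

# Recompute each element from the formula C_ij = sum over k of A_ik * B_kj
C_manual = np.zeros((n, m), dtype=int)
for i in range(n):
    for j in range(m):
        for k in range(p):
            C_manual[i, j] += A[i, k] * B[k, j]

print(C.shape)                       # (2, 4)
print(np.array_equal(C, C_manual))   # True
```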

