{"id":3433,"date":"2015-04-12T01:34:05","date_gmt":"2015-04-12T06:34:05","guid":{"rendered":"http:\/\/www.ssc.wisc.edu\/~jfrees\/?page_id=3433"},"modified":"2015-04-19T14:31:59","modified_gmt":"2015-04-19T19:31:59","slug":"matrix-multiplication","status":"publish","type":"page","link":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/regression\/basic-linear-regression\/2-9-technical-supplement-elements-of-matrix-algebra\/basic-operations\/matrix-multiplication\/","title":{"rendered":"Matrix Multiplication"},"content":{"rendered":"<p>In general, if \\(\\mathbf{A}\\) is a matrix of dimension \\(n\\times c\\) and \\( \\mathbf{B}\\) is a matrix of dimension \\(c\\times k\\), then \\(\\mathbf{C}=\\mathbf{AB}\\) is a matrix of dimension \\(n\\times k\\) and is defined by<br \/>\n\\begin{equation*} \\mathbf{C}=\\mathbf{AB}=\\left( \\sum_{s=1}^{c}a_{is}b_{sj}\\right) _{ij}. \\end{equation*} For example, consider the \\(2\\times 2\\) matrices<br \/>\n\\begin{equation*} \\mathbf{A}=\\left( \\begin{array}{cc} 2 &#038; 5 \\\\ 4 &#038; 1 \\end{array} \\right) \\text{     }\\mathbf{B}=\\left( \\begin{array}{cc} 4 &#038; 6 \\\\ 8 &#038; 1 \\end{array} \\right) . \\end{equation*} The matrix \\(\\mathbf{AB}\\) has dimension \\(2\\times 2\\). To illustrate the calculation, consider the number in the first row and second column of \\( \\mathbf{AB}\\). By the rule presented above, with \\(i=1\\) and \\(j=2\\), the corresponding element of \\(\\mathbf{AB}\\) is \\( \\sum_{s=1}^{2}a_{1s}b_{s2}=a_{11}b_{12}+a_{12}b_{22}=2(6)+5(1)=17\\). The other calculations are summarized as<br \/>\n\\begin{equation*} \\mathbf{AB}=\\left( \\begin{array}{cc} 2(4)+5(8) &#038; 2(6)+5(1) \\\\ 4(4)+1(8) &#038; 4(6)+1(1) \\end{array} \\right) =\\left( \\begin{array}{cc} 48 &#038; 17 \\\\ 24 &#038; 25 \\end{array} \\right) . 
\\end{equation*} As another example, suppose<br \/>\n\\begin{equation*} \\mathbf{A}=\\left( \\begin{array}{ccc} 1 &#038; 2 &#038; 4 \\\\ 0 &#038; 5 &#038; 8 \\end{array} \\right) \\text{     }\\mathbf{B}=\\left( \\begin{array}{c} 3 \\\\ 5 \\\\ 2 \\end{array} \\right) . \\end{equation*} Because \\(\\mathbf{A}\\) has dimension \\(2\\times 3\\) and \\(\\mathbf{B}\\) has dimension \\(3\\times 1\\), the product \\(\\mathbf{AB}\\) has dimension \\(2\\times 1\\). The calculations are summarized as<br \/>\n\\begin{equation*} \\mathbf{AB}=\\left( \\begin{array}{c} 1(3)+2(5)+4(2) \\\\ 0(3)+5(5)+8(2) \\end{array} \\right) =\\left( \\begin{array}{c} 21 \\\\ 41 \\end{array} \\right) . \\end{equation*} For some additional examples, we have \\begin{equation*} \\left( \\begin{array}{cc} 4 &#038; 2 \\\\ 5 &#038; 8 \\end{array} \\right) \\left( \\begin{array}{c} a_1 \\\\ a_2 \\end{array} \\right) =\\left( \\begin{array}{c} 4a_1+2a_2 \\\\ 5a_1+8a_2 \\end{array} \\right) \\end{equation*} and \\begin{equation*} \\left( \\begin{array}{ccc} 2 &#038; 3 &#038; 5 \\end{array} \\right) \\left( \\begin{array}{c} 2 \\\\ 3 \\\\ 5 \\end{array} \\right) =2^2+3^2+5^2=38\\text{     }\\left( \\begin{array}{c} 2 \\\\ 3 \\\\ 5 \\end{array} \\right) \\left( \\begin{array}{ccc} 2 &#038; 3 &#038; 5 \\end{array} \\right) =\\left( \\begin{array}{ccc} 4 &#038; 6 &#038; 10 \\\\ 6 &#038; 9 &#038; 15 \\\\ 10 &#038; 15 &#038; 25 \\end{array} \\right) . \\end{equation*} These examples show that, in general, \\(\\mathbf{AB}\\neq \\mathbf{BA}\\); unlike multiplication of scalars (real numbers), matrix multiplication is not commutative. Further, we remark that the identity matrix serves the role of &#8220;one&#8221; in matrix multiplication, in that \\(\\mathbf{AI}=\\mathbf{A}\\) and \\(\\mathbf{IA}=\\mathbf{A}\\) for any matrix \\(\\mathbf{A}\\), provided that the dimensions are compatible to allow matrix multiplication. <\/p>\n<p><strong>Basic Linear Regression Example of Matrix Multiplication<\/strong>. 
Define<br \/>\n\\begin{equation*} \\mathbf{X}=\\left( \\begin{array}{cc} 1 &#038; x_1 \\\\ \\vdots &#038; \\vdots \\\\ 1 &#038; x_n \\end{array} \\right) \\text{  and   }\\boldsymbol \\beta =\\left( \\begin{array}{c} \\beta_0 \\\\ \\beta_1 \\end{array} \\right) \\text{, to get  } \\mathbf X \\boldsymbol \\beta =\\left( \\begin{array}{c} \\beta_0+\\beta_1x_1 \\\\ \\vdots \\\\ \\beta_0+\\beta_1x_n \\end{array} \\right) =\\mathrm{E}~\\mathbf{y}. \\end{equation*} Adding the vector of disturbances \\(\\boldsymbol \\varepsilon \\) then yields the familiar matrix expression of the regression model, \\( \\mathbf{y}=\\mathbf{X} \\boldsymbol \\beta + \\boldsymbol \\varepsilon .\\) Other useful quantities include<br \/>\n\\begin{equation*} \\mathbf{y}^{\\prime }\\mathbf{y}=\\left( \\begin{array}{ccc} y_1 &#038; \\cdots &#038; y_n \\end{array} \\right) \\left( \\begin{array}{c} y_1 \\\\ \\vdots \\\\ y_n \\end{array} \\right) =y_1^2+\\cdots +y_n^2=\\sum_{i=1}^{n}y_i^2, \\end{equation*}<br \/>\n\\begin{equation*} \\mathbf{X}^{\\prime }\\mathbf{y}=\\left( \\begin{array}{ccc} 1 &#038; \\cdots &#038; 1 \\\\ x_1 &#038; \\cdots &#038; x_n \\end{array} \\right) \\left( \\begin{array}{c} y_1 \\\\ \\vdots \\\\ y_n \\end{array} \\right) =\\left( \\begin{array}{c} \\sum_{i=1}^{n}y_i \\\\ \\sum_{i=1}^{n}x_iy_i \\end{array} \\right) \\end{equation*} and<br \/>\n\\begin{equation*} \\mathbf{X}^{\\prime }\\mathbf{X}=\\left( \\begin{array}{ccc} 1 &#038; \\cdots &#038; 1 \\\\ x_1 &#038; \\cdots &#038; x_n \\end{array} \\right) \\left( \\begin{array}{cc} 1 &#038; x_1 \\\\ \\vdots &#038; \\vdots \\\\ 1 &#038; x_n \\end{array} \\right) =\\left( \\begin{array}{cc} n &#038; \\sum_{i=1}^{n}x_i \\\\ \\sum_{i=1}^{n}x_i &#038; \\sum_{i=1}^{n} x_i^2 \\end{array} \\right) . \\end{equation*} Note that \\(\\mathbf{X}^{\\prime }\\mathbf{X}\\) is a symmetric matrix. 
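As a numerical sketch, the quantities \(\mathbf{X}^{\prime}\mathbf{y}\) and \(\mathbf{X}^{\prime}\mathbf{X}\) above can be reproduced with numpy; the data values below are illustrative, not taken from the text:

```python
import numpy as np

# Illustrative data (made up for demonstration): n = 4 observations
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 5.0, 7.0])
n = len(x)

# Design matrix X: a column of ones followed by the column of x-values
X = np.column_stack([np.ones(n), x])

# X'y stacks sum(y_i) and sum(x_i * y_i): here [18, 53]
Xty = X.T @ y

# X'X collects n, sum(x_i), and sum(x_i^2): here [[4, 10], [10, 30]]
XtX = X.T @ X

print(Xty)
print(XtX)
print(np.allclose(XtX, XtX.T))  # X'X equals its transpose: symmetric
```

Swapping in any other data vectors of equal length reproduces the same pattern: the (1,1) entry of \(\mathbf{X}^{\prime}\mathbf{X}\) is always the sample size \(n\).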
<\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":3423,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"","meta":{"jetpack_post_was_ever_published":false},"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/P8cLPd-Tn","acf":[],"_links":{"self":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages\/3433"}],"collection":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/comments?post=3433"}],"version-history":[{"count":2,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages\/3433\/revisions"}],"predecessor-version":[{"id":3435,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/pages\/3433\/revisions\/3435"}],"up":[{"embeddable":true,"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2
\/pages\/3423"}],"wp:attachment":[{"href":"https:\/\/users.ssc.wisc.edu\/~ewfrees\/wp-json\/wp\/v2\/media?parent=3433"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
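The worked examples in the page above can be checked with a short numpy sketch; the matrices are exactly those in the text:

```python
import numpy as np

# 2x2 example: the (i, j) entry of AB is sum over s of a_is * b_sj
A = np.array([[2, 5], [4, 1]])
B = np.array([[4, 6], [8, 1]])
print(A @ B)      # matches the worked calculation: [[48, 17], [24, 25]]

# 2x3 times 3x1: the product has dimension 2x1
A2 = np.array([[1, 2, 4], [0, 5, 8]])
B2 = np.array([[3], [5], [2]])
print(A2 @ B2)    # [[21], [41]]

# Non-commutativity: a row vector times a column vector is 1x1 (here 38),
# while the column vector times the row vector is a 3x3 matrix
v = np.array([[2, 3, 5]])
print(v @ v.T)    # [[38]]
print(v.T @ v)    # [[4, 6, 10], [6, 9, 15], [10, 15, 25]]
```

The `@` operator implements exactly the summation rule \(c_{ij}=\sum_{s} a_{is}b_{sj}\), and raises an error when the inner dimensions are incompatible, mirroring the dimension requirement stated in the text.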