The dot product can be used to write the sum:
$$\sum_{i=1}^n a_i b_i$$
as $$a^T b$$
Is there an equivalent notation for the following sum:
$$\sum_{i=1}^n a_i b_i c_i$$
4 Answers
$$\sum_{i=1}^n a_ib_ic_i = {\bf a}^T{\bf C}{\bf b}, \quad\textrm{where}\quad {\bf C} = \operatorname{diag}({\bf c}), \text{ i.e. } C_{ii} = c_i \text{ and } C_{ij} = 0 \text{ for } i\neq j.$$
$\bf C$ is the diagonal matrix with the entries of the column vector $\bf c$ on its diagonal; it acts as the Gram matrix of a (weighted) scalar product between the other two vectors. Note the symmetry, however: we may just as well take $\operatorname{diag}({\bf a})$ or $\operatorname{diag}({\bf b})$ as the Gram matrix defining the scalar product between the remaining two vectors.
This extends to $N$ vectors by putting $N-2$ such diagonal matrices as a product "in the middle", with a row vector on the left and a column vector on the right: $${\bf a}^T{\bf B}{\bf C}{\bf d} = \sum_{i=1}^n a_ib_ic_id_i,$$ where ${\bf B} = \operatorname{diag}({\bf b})$ and ${\bf C} = \operatorname{diag}({\bf c})$, and nothing prevents us from inserting more such matrices in the middle without limit.
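As a quick numerical sanity check (a sketch using NumPy, with made-up example vectors, not values from the answer), the diagonal-matrix identity can be verified directly:

```python
import numpy as np

# Hypothetical example vectors.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

# Direct sum: sum_i a_i * b_i * c_i
direct = np.sum(a * b * c)

# Matrix form: a^T C b with C = diag(c)
C = np.diag(c)
matrix_form = a @ C @ b

# The two agree: 1*4*7 + 2*5*8 + 3*6*9 = 28 + 80 + 162 = 270
```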
EDIT: A more general way to write it would be:
$$\sum_{i}\prod_{k=1}^N({\bf a}_k)_i = \operatorname{Tr}\left(\prod_{k=1}^N{\bf A}_k\right)$$ A trace of a product of diagonal matrices, where we enumerate the vectors ${\bf a}_k$ and define the corresponding matrices ${\bf A}_k = \operatorname{diag}({\bf a}_k)$. This is just to be able to write the identity more practically with product and sum notation.
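The trace identity can likewise be checked numerically; here is a minimal sketch in NumPy for $N=4$ made-up vectors:

```python
import numpy as np

# Four hypothetical 2-component vectors a_1, ..., a_4.
vectors = [np.array([1.0, 2.0]),
           np.array([3.0, 4.0]),
           np.array([5.0, 6.0]),
           np.array([7.0, 8.0])]

# Left side: sum over i of the product of the i-th components.
lhs = np.sum(np.prod(np.stack(vectors), axis=0))

# Right side: trace of the product of diagonal matrices A_k = diag(a_k).
product = np.eye(2)
for v in vectors:
    product = product @ np.diag(v)
rhs = np.trace(product)

# lhs == rhs == 1*3*5*7 + 2*4*6*8 = 105 + 384 = 489
```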
There's not really a specific notation for the sum you've written down, because it doesn't come up all that often. One particular reason why not is that it's coordinate-dependent: what result you get depends on the specific choice of basis vectors you use.
To see this, let's look at 2-dimensional vectors with the standard basis $\{\langle1,0\rangle,\langle0,1\rangle\}$. Then the 'triple product' of the vectors $\vec{a}=\langle4,1\rangle$, $\vec{b}=\langle2,5\rangle$ and $\vec{c}=\langle3,0\rangle$ is $4\cdot2\cdot3+1\cdot5\cdot0=24$. But now let's look at a rotated coordinate system, $\vec{e_1}=\langle s, s\rangle$ and $\vec{e_2}=\langle-s, s\rangle$ where $s=\frac1{\sqrt{2}}$: in this system we have $\vec{a}=[5s, -3s]$, $\vec{b}=[7s, 3s]$ and $\vec{c}=[3s, -3s]$, and the sum is $(5s)\cdot(7s)\cdot(3s)+(-3s)\cdot(3s)\cdot(-3s)=105s^3+27s^3=132s^3=66s$ (since $s^2=\frac12$), not $24$. By contrast, the 'usual' dot product of any two of these vectors is the same whichever basis you use.
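The basis dependence in this example is easy to reproduce numerically. A sketch in NumPy, solving for the rotated-basis coordinates rather than hard-coding them:

```python
import numpy as np

s = 1 / np.sqrt(2)

# Standard-basis coordinates from the example above.
a = np.array([4.0, 1.0])
b = np.array([2.0, 5.0])
c = np.array([3.0, 0.0])

# Rotated basis e1 = (s, s), e2 = (-s, s); columns of E are the basis vectors.
E = np.array([[s, -s],
              [s,  s]])
a2, b2, c2 = (np.linalg.solve(E, v) for v in (a, b, c))

def triple(x, y, z):
    return np.sum(x * y * z)

# triple(a, b, c) is 24, but triple(a2, b2, c2) is 66*s (about 46.7):
# the 'triple sum' changes with the basis, while the dot product does not.
```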
On the other hand, for three-dimensional vectors there is a well-defined 'triple product' (although not the formula you give): it can be defined as either the product $\vec{a}\cdot(\vec{b}\times\vec{c})$ or the determinant of the $3\times3$ matrix whose entries are the coordinates of $a$, $b$ and $c$. Note that both of these definitions appear to use coordinates, but they're actually coordinate-independent; a different definition that makes the coordinate-independence clear is that the triple product is the signed volume of the parallelepiped spanned by $\vec{a}$, $\vec{b}$ and $\vec{c}$. This is exactly the analog of the two-dimensional 'cross product' (the determinant of the $2\times2$ matrix formed by two vectors), which is the signed area of the parallelogram spanned by the two vectors.
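A small NumPy sketch (with made-up vectors) showing that the two definitions agree, and that the value survives a rotation of coordinates:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])
c = np.array([2.0, 1.0, 0.0])

# Definition 1: a . (b x c)
via_cross = np.dot(a, np.cross(b, c))

# Definition 2: determinant of the matrix with a, b, c as rows.
via_det = np.linalg.det(np.stack([a, b, c]))

# Rotating all three vectors by the same rotation leaves the value unchanged,
# illustrating coordinate-independence.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rotated = np.dot(R @ a, np.cross(R @ b, R @ c))
```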
You could use the Hadamard product $\circ$, which is just an element-wise product, to get:
$$(a\circ b)^Tc = \sum_{i=1}^n a_ib_ic_i$$
This is just to give some "known" notation. Of course you could come up with your own notation for the sum.
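In NumPy the Hadamard product is just `*`, so the identity is a one-liner (a sketch with made-up vectors):

```python
import numpy as np

a = np.array([2.0, 3.0])
b = np.array([5.0, 7.0])
c = np.array([1.0, 4.0])

# (a ∘ b)^T c: element-wise product, then an ordinary dot product.
hadamard_form = (a * b) @ c
direct = np.sum(a * b * c)

# Both equal 2*5*1 + 3*7*4 = 10 + 84 = 94
```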
Not really.
Various summation-of-products concepts are natural for bilinear forms and linear transformations (matrices) and for tensors. For example, the "Einstein summation convention" used in physics says that when we write $$ A_a^{i}B^{e}_{i}$$ we really mean the contraction of the tensor product, summing over the repeated index: $$ \sum_i A_a^{i}B^{e}_{i}$$
But that is always two-index summation.
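As an aside (my own observation, not part of the answer), NumPy's `einsum` implements this convention programmatically, and unlike standard mathematical notation it can express the three-vector sum directly:

```python
import numpy as np

# Contraction over the repeated index i, as in A_a^i B^e_i.
A = np.arange(6.0).reshape(2, 3)   # indices (a, i)
B = np.arange(6.0).reshape(2, 3)   # indices (e, i)
contracted = np.einsum('ai,ei->ae', A, B)   # equivalent to A @ B.T

# The same machinery handles sum_i a_i b_i c_i with a thrice-repeated index.
a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])
c = np.array([5.0, 6.0])
triple = np.einsum('i,i,i->', a, b, c)   # 1*3*5 + 2*4*6 = 63
```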
In particular, you won't find any combination of matrix products, sums, determinants, traces, and transposes that for general dimension $n$ will yield $$ \sum_{i=1}^n a_ib_ic_i$$
I suppose there is nothing fundamental behind the fact that quadratic expressions are so prominent while higher-order expressions are not.