Your proof portfolio is a set of conceptual problems and proofs that you'll write and revise throughout the semester. Choose your problems from the following list, typesetting your solutions in Overleaf. Include both a PDF copy of your entire proof portfolio and its Overleaf read-and-edit link with each course portfolio submission.
The exercises to submit on each portfolio's due date must be chosen from among the following. You may add others during the revision process if you wish, but on each due date your portfolio must include at least 3-5 of the listed exercises:
To get started with writing your document in Overleaf, Overleaf's brief video tutorials can help: https://www.overleaf.com/help/category/video_tutorials
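If you've never built a LaTeX document before, a minimal skeleton like the one below is enough to start a portfolio in Overleaf. This is only a suggested starting point; the package choices and the \newtheorem setup are assumptions, not course requirements.

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm} % display math, \mathbb, and the proof environment
\newtheorem{exercise}{Exercise}     % a simple numbered exercise environment

\begin{document}

\begin{exercise}
Prove that if a linear system of equations has a unique solution,
then it has no free variables.
\end{exercise}

\begin{proof}
Your argument goes here.
\end{proof}

\end{document}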
Exercise 4.1.1. Let \((x_1,y_1)\) and \((x_2,y_2)\) both be solutions of the equation
\begin{gather} 4x - 5y = 0.\tag{4.1.1} \end{gather}

Exercise 4.1.2. Let \((x_1,y_1)\) and \((x_2,y_2)\) both be solutions of the equation
\begin{gather} 4x - 5y = 12.\tag{4.1.2} \end{gather}

Exercise 4.1.3. Consider the equation
\begin{gather} x^2 + y = 16.\tag{4.1.3} \end{gather}

Exercise 4.1.4. Supply an argument in favor of the following statement: The equation
\begin{equation*} x^3-3x^2y+3xy^2-y^3=0 \end{equation*}
is a linear equation.

Hint: It'll take some "rainy-day algebra," but can you verify the conclusions of Exercise 4.1.1 for this equation?
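The "rainy-day algebra" here is, plausibly, the perfect-cube expansion (a standard identity, offered as a nudge rather than a full solution):
\begin{equation*} (x-y)^3 = x^3 - 3x^2y + 3xy^2 - y^3, \end{equation*}
so the equation above says exactly that \((x-y)^3 = 0\text{.}\)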
Exercise 4.1.5. Prove by examples that a linear system of 4 equations and 3 unknowns may have any of the following:

Hint: Begin by writing the reduced row-echelon form for each case. You can then use row operations to "scramble" the system into something more interesting, as in the sketch below.
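For instance, here is a sketch of one possible case (a consistent system of 4 equations and 3 unknowns with the unique solution \((2,-1,3)\)): start from the reduced row-echelon form on the left, then apply a row operation such as \(R_1 \to R_1 + R_2\) to disguise it.
\begin{equation*} \left[\begin{array}{ccc|c} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|c} 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 \end{array}\right] \end{equation*}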
Exercise 4.1.6. Prove or disprove that a linear system of 3 equations and 4 unknowns may have each of the following:

Hint: Think carefully: when is a single example enough, and when do you need a universal argument?
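A logic reminder that applies to this exercise and several later ones: a universal claim needs an argument for an arbitrary instance, while its negation only needs one witness, since
\begin{equation*} \neg\big(\forall\, {\bf b},\ P({\bf b})\big) \iff \exists\, {\bf b},\ \neg P({\bf b}). \end{equation*}
So a single example suffices to disprove a "for all" statement, but never to prove one.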
Exercise 4.1.7. Prove that if a linear system of equations has a unique solution, then it has no free variables.
Exercise 4.1.8. Consider a linear system of \(m\) equations and \(n\) unknowns.

Hint: (a) Your answer might depend on which of \(m\) or \(n\) is greater. (c) This is a quick proof if you can unite your answers to parts (a)-(b) with Exercise 4.1.7. Don't reinvent the wheel if you've proven those results first!
Exercise 4.1.9. Let \({\bf v}_1, {\bf v}_2, {\bf v}_3\) be arbitrary vectors in \(\mathbb{R}^3\text{.}\)

Hint: There is a way to do part (b) very quickly, if you can establish that the answer to this question is really the solution of some linear system! (Why? What do we know about how many solutions a linear system may have?)
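Although the parts of this exercise are not reproduced here, questions about \({\rm span}\{{\bf v}_1,{\bf v}_2,{\bf v}_3\}\) typically reduce to a linear system as follows: a vector \({\bf b}\) lies in the span exactly when the equation
\begin{equation*} c_1 {\bf v}_1 + c_2 {\bf v}_2 + c_3 {\bf v}_3 = {\bf b} \end{equation*}
has a solution in the unknowns \(c_1,c_2,c_3\text{,}\) i.e. exactly when the linear system with augmented matrix \(\left[\,{\bf v}_1\ {\bf v}_2\ {\bf v}_3 \mid {\bf b}\,\right]\) is consistent.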
Exercise 4.1.10. Suppose \({\bf v}_1,{\bf v}_2,\ldots,{\bf v}_n \in \mathbb{R}^m\) is a collection of \(m\)-dimensional vectors, and suppose that the matrix
\begin{equation*} \left[ \begin{array}{cccc} \vdots & \vdots & & \vdots \\ {\bf v}_1 & {\bf v}_2 & \cdots & {\bf v}_n\\ \vdots & \vdots & & \vdots \end{array}\right] \end{equation*}
has a pivot position in every row.

Hint: Part (a) makes a universal claim ("for all \({\bf b}\text{,}\) we have...") and so it must be proven in generality. By contrast, part (b) asks you to disprove a universal claim ("there exists \({\bf b}\) such that we don't have..."). So you can prove (b) with a specific example if you wish. Just be sure you explain what it is about your example that makes it work.
Exercise 4.1.11. Let \(A\) be an \(m\times n\) matrix and \({\bf b} \in \mathbb{R}^m\) be a vector.

Hint: If you find yourself writing down any actual matrices or even coordinate components, you're probably working too hard. These arguments need only the linearity properties of matrix multiplication to make them work (recalled below).
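For reference, the linearity properties in question are the following: for all vectors \({\bf u},{\bf v} \in \mathbb{R}^n\) and scalars \(c\text{,}\)
\begin{equation*} A({\bf u} + {\bf v}) = A{\bf u} + A{\bf v} \qquad \text{and} \qquad A(c\,{\bf u}) = c\,A{\bf u}. \end{equation*}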
Exercise 4.1.12. Let \(A\) be an \(n\times n\) square matrix. Suppose that \({\bf v}_1, {\bf v}_2 \in {\mathbb R}^n\) are vectors which have the property that
\begin{gather} A{\bf v}_1 = {\bf v}_1 \text{ and } A{\bf v}_2 = \frac12 {\bf v}_2.\tag{4.1.4} \end{gather}
Let \({\bf x} = c_1 {\bf v}_1 + c_2 {\bf v}_2 \in {\rm span}\{{\bf v}_1,{\bf v}_2\}\text{.}\) Prove that
\begin{equation*} \lim_{k\to\infty} A^k\, {\bf x} = c_1 {\bf v}_1. \end{equation*}

Exercise 4.1.13. Let \(A\) be an arbitrary \(3\times 4\) matrix and \(B\) be an arbitrary \(4\times 5\) matrix. The dimensions of these matrices make it possible to form the product matrix \(P = AB\text{.}\)
"Do the columns span the whole codomain" is really a question about "is the linear system always consistent?" Try treating the equation \(P{\bf x} = {\bf b}\) in two steps:
\begin{align*} A\, (\underbrace{B{\bf x}}_{\bf y}) \amp = {\bf b} \amp\amp \text{can we always solve for y? why?}\\ B{\bf x} \amp = {\bf y} \amp\amp \text{can we always solve for x? why?} \end{align*}Let \({\bf v}_1,{\bf v}_2,{\bf w}\) be a set of vectors that is linearly dependent.
Hint: Remember that if \(X,Y\) are sets, proving that \(X=Y\) requires you to prove two containments: \(X \subseteq Y\) and \(Y \subseteq X\text{.}\)
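A proof of \(X = Y\) therefore usually takes the following two-part shape (a generic template, with the exercise-specific reasoning elided):

\begin{proof}
(\(X \subseteq Y\)) Let \(x \in X\). ... Therefore \(x \in Y\).
(\(Y \subseteq X\)) Let \(y \in Y\). ... Therefore \(y \in X\).
Since each set contains the other, \(X = Y\).
\end{proof}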
Exercise 4.1.15. Let \(A\) be an \(n\times n\) square matrix, and \({\bf e}_1,{\bf e}_2,\ldots,{\bf e}_n\) be the standard vectors in \(\mathbb{R}^n.\) Prove that if \(A{\bf e}_1, A{\bf e}_2,\ldots, A{\bf e}_n\) are linearly independent, then the equation
\begin{equation*} A{\bf x} = {\bf b} \end{equation*}
has a unique solution for all \({\bf b}.\)

Hint: What do you get when you multiply any matrix by the first standard vector \({\bf e}_1\text{?}\) Try it. Does that always work? What does that tell you here?
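Here is the hint carried out for a generic \(2\times 2\) matrix (the entries \(a,b,c,d\) are placeholders):
\begin{equation*} \left[\begin{array}{cc} a & b \\ c & d \end{array}\right] \left[\begin{array}{c} 1 \\ 0 \end{array}\right] = \left[\begin{array}{c} a \\ c \end{array}\right], \end{equation*}
which is the first column of the matrix. In general, \(A{\bf e}_j\) is the \(j\)th column of \(A\text{.}\)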
Exercise 4.1.16. Let \(A\) be a \(3\times 4\) matrix and \(B\) be a \(4\times 5\) matrix. Suppose that the product
\begin{equation*} P = AB = O \end{equation*}
is a matrix whose entries are all zero.

Hint: If you proved Exercise 4.1.13, you can use that result to make part (a) very quick. In part (b) you can also use the same hint from that exercise:
\begin{align*} A\, (\underbrace{B{\bf x}}_{\bf y}) & = {\bf 0} && \text{which } {\bf y} \text{ will satisfy this?}\\ B{\bf x} & = {\bf y} && \text{how to ensure that } B{\bf x} \text{ is always one of those } {\bf y}\text{'s?} \end{align*}