What I know is that a function (maybe?) $T$ is called a linear transformation if, for any scalar $k$, $kT(A) = T(kA)$, and $T(A) + T(B) = T(A + B)$ for any matrices $A$ and $B$ in the domain of $T$. I am not sure whether this is correct. If I am wrong, please correct me; if I am correct, please point out other things I should note about linear transformations.
Video transcript:
A linear transformation is a function that preserves vector addition and scalar multiplication. Formally, a function $T$ is a linear transformation if it satisfies two key properties: first, homogeneity, $T(kA) = kT(A)$, which means that scaling a vector and then applying the transformation gives the same result as applying the transformation first and then scaling; second, additivity, $T(A + B) = T(A) + T(B)$, which means that the transformation of a sum equals the sum of the transformations. In this visualization, you can see how vectors $A$ and $B$ are transformed, and how their sum $A + B$ transforms in a way that preserves these linear properties.
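To make the two properties concrete, here is a minimal sketch in Python using NumPy that spot-checks homogeneity and additivity for a map of the form $T(v) = Mv$; the matrix `M`, the vectors `v` and `w`, and the scalar `k` are arbitrary values chosen for illustration.

```python
import numpy as np

# Any matrix M induces a map T(v) = M @ v; check the two defining properties.
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def T(v):
    return M @ v

v = np.array([1.0, -2.0])
w = np.array([4.0, 0.5])
k = 3.0

# Homogeneity: T(k v) == k T(v)
print(np.allclose(T(k * v), k * T(v)))     # True

# Additivity: T(v + w) == T(v) + T(w)
print(np.allclose(T(v + w), T(v) + T(w)))  # True
```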
Linear transformations have several important properties. First, they preserve linear combinations: $T(c_1 A + c_2 B) = c_1 T(A) + c_2 T(B)$ for any scalars $c_1, c_2$, so the transformation of a weighted sum equals the weighted sum of the transformations. Second, they always map the zero vector to the zero vector: $T(0) = 0$. Third, any linear transformation between finite-dimensional vector spaces can be represented by a matrix. In this visualization, you can see how a matrix transforms the coordinate system. The basis vectors and grid lines of the original space are mapped to new positions in the transformed space, illustrating how the matrix acts as a linear transformation.
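As a sketch of the matrix-representation fact: the matrix of a linear map has the images of the standard basis vectors as its columns. The map `T` below is a made-up example, used only to show how its matrix can be recovered and how the zero vector and linear combinations are preserved.

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([2 * x + y, 3 * y])  # an illustrative linear map

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# Columns of the matrix are T(e1) and T(e2)
M = np.column_stack([T(e1), T(e2)])
print(M)               # [[2. 1.]
                       #  [0. 3.]]

# The zero vector maps to the zero vector
print(T(np.zeros(2)))  # [0. 0.]

# Linear combinations are preserved: T(a v + b w) == a T(v) + b T(w)
v, w, a, b = np.array([1.0, 2.0]), np.array([-1.0, 1.0]), 2.0, -0.5
print(np.allclose(T(a * v + b * w), a * T(v) + b * T(w)))  # True
```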
Two important subspaces associated with a linear transformation are the kernel and the image. The kernel, also called the null space, consists of all vectors in the domain that map to the zero vector in the codomain; it is always a subspace of the domain. The image, or range, consists of all possible outputs of the transformation; it is always a subspace of the codomain. The rank-nullity theorem establishes a fundamental relationship between them: $\dim(\text{domain}) = \dim(\ker T) + \dim(\operatorname{im} T)$. In this visualization, the red line represents the kernel of a transformation, where all vectors map to zero. The green line represents the image, showing all possible outputs of the transformation.
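Here is a small sketch of the kernel, the image, and the rank-nullity relationship, assuming NumPy and SciPy are available; the rank-1 matrix is chosen just so the kernel is nontrivial.

```python
import numpy as np
from scipy.linalg import null_space

# For T(v) = M @ v, the kernel is the null space of M and the
# image is its column space.
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second row is twice the first, so rank 1

rank = np.linalg.matrix_rank(M)      # dim(image)
kernel_basis = null_space(M)         # orthonormal basis for the kernel
nullity = kernel_basis.shape[1]      # dim(kernel)

print(rank, nullity)                 # 1 1
# Rank-nullity: dim(domain) = dim(kernel) + dim(image)
print(M.shape[1] == rank + nullity)  # True
```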
Let's explore some common examples of linear transformations. Rotation transforms vectors by rotating them around the origin by a fixed angle; a counterclockwise rotation by $\theta$ has matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. Scaling stretches or compresses vectors along the coordinate axes. Shear transformations shift points parallel to an axis, with the amount of shift proportional to the distance from that axis. Projection maps vectors onto a subspace, such as projecting onto the x-axis. Each of these transformations can be represented by a specific matrix, and they all satisfy the properties of linear transformations: they preserve vector addition and scalar multiplication.
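For reference, here is a sketch that builds the $2 \times 2$ matrices for these four examples and applies each to the same test vector; the angle, scale factors, and shear amount are arbitrary illustrative choices.

```python
import numpy as np

theta = np.pi / 4  # rotation angle, chosen arbitrarily

# Counterclockwise rotation by theta about the origin
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Scaling by 2 along x and 0.5 along y
S = np.diag([2.0, 0.5])

# Horizontal shear: x is shifted by an amount proportional to y
H = np.array([[1.0, 1.5],
              [0.0, 1.0]])

# Projection onto the x-axis
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

v = np.array([1.0, 1.0])
for name, M in [("rotation", R), ("scaling", S),
                ("shear", H), ("projection", P)]:
    print(name, M @ v)
```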
To summarize what we've learned about linear transformations: A linear transformation is a function that preserves vector addition and scalar multiplication, meaning it satisfies the two key properties you initially mentioned. Linear transformations always preserve linear combinations and map the zero vector to the zero vector. In finite-dimensional vector spaces, every linear transformation can be represented by a matrix. Two important subspaces associated with a linear transformation are the kernel, which is the set of vectors that map to zero, and the image, which is the set of all possible outputs. These subspaces are related by the rank-nullity theorem, which states that the dimension of the domain equals the sum of the dimensions of the kernel and the image. Common examples of linear transformations include rotation, scaling, shear, and projection, each with its own matrix representation.