Merged
3 changes: 3 additions & 0 deletions .config/cspell.yaml
@@ -49,6 +49,7 @@ dictionaries:
- web-services
- custom-dict-acronyms
- custom-dict-names
- custom-dict-other
- custom-dict-words-en
- custom-dict-words-es

@@ -70,6 +71,8 @@ dictionaryDefinitions:
path: "./custom-dict-acronyms.txt"
- name: custom-dict-names
path: "./custom-dict-names.txt"
- name: custom-dict-other
path: "./custom-dict-other.txt"
- name: custom-dict-words-en
path: "./custom-dict-words-en.txt"
- name: custom-dict-words-es
43 changes: 1 addition & 42 deletions .config/custom-dict-names.txt
@@ -1,85 +1,63 @@
fibonacci
Fibonacci
jnonino
Liskov



Aguilar
Alahyari
AMACOM
Apress
Arie
Artech

Badgett
Basili
Beedle
Bennekum
bmatrix
Boehm
Boersma
Booch
Brookshear

Chacon
Charette
Christel
Codecademy
codeimporter
Coghlan
Cormen

Deitel

Easterbrook
elif
Eloranta
Erlikh
ESEM
Euromicro

Fewster
fibonacci
franca

Gagne
Grenning
Grinberg

Hannes
Hapke
Harvill
Hennessy
hextra
Highsmith
Hiva
Holmstrom

Ifrah
infty
Itkonen

Jalote
Jaskiel
jnonino
Jouni
Joyanes
Juha

Kaner
Kazman
Kerzner
Kimmo
Krivy

Laplante
Leanpub
Leiserson
Leppanen
Lianping
Liraz
Liskov

Maberly
Makinen
Mannisto
@@ -88,46 +66,35 @@ Marick
Markku
Markkula
Marko
mathbb
mathbf
Matthes
Matyas
Mika
Mikolov
Moroney

Nonino
Numpy
Nuseibeh
NXOR

Oivo
Olsson
OOPSLA
Oram

Packt
Pagels
Parnas
Pekka
Pfleeger
Pilar
pmatrix
Pulkkinen
Pylint
pytest

Raoul
Rashka
Replit
Rossum
Rumbaugh

Sams
Schwaber
SDLC
Sebesta
sectioncards
Siewiorek
Silberschatz
Simo
@@ -136,20 +103,12 @@ Sommerville
Stolt
Straub
Swarz

Turula

UMAP

Veli
Vlissides

WESCON
Wiegers
wsgi

XNOR

Yourdon

Zelkowitz
12 changes: 12 additions & 0 deletions .config/custom-dict-other.txt
@@ -0,0 +1,12 @@
bmatrix
codeimporter
elif
hextra
mathbb
mathbf
pmatrix
pylint
pytest
sectioncards
vmatrix
wsgi
1 change: 1 addition & 0 deletions .config/custom-dict-words-en.txt
@@ -1,3 +1,4 @@
anticommutativity
explainability
regulariser
sidelining
2 changes: 2 additions & 0 deletions .config/custom-dict-words-es.txt
@@ -1,8 +1,10 @@
anticonmutatividad
aprendé
autocompletado
bayesianos
contrastivo
convolucionales
curvá
dejá
descripto
dimensionalidad
45 changes: 45 additions & 0 deletions content/ai/math/algebra/vectors/index.en.md
@@ -260,6 +260,51 @@ $$
works precisely because of this geometry. Semantic relationships are encoded as *directions* in vector space, and finding `queen` means finding the vector whose cosine similarity to the query vector is maximized. Every modern embedding model (BERT, GPT, sentence-transformers) inherits this geometric philosophy. Next time you read something about word representation in vector spaces, remember they are talking about the same geometry we just derived.
{{< /callout >}}

### The cross product

The dot product takes two vectors and returns a **scalar**. The **cross product** takes two vectors in \(\mathbb{R}^3\) and returns a **vector**, one that is perpendicular to both inputs. It is defined only in three (and seven) dimensions, which makes it more geometrically specialised than the dot product.

Given \(\mathbf{u} = [u_1, u_2, u_3]^T\) and \(\mathbf{v} = [v_1, v_2, v_3]^T\), the cross product \(\mathbf{u} \times \mathbf{v}\) is computed by expanding the following symbolic determinant:

$$
\mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}
$$

Expanding along the first row:

$$
\mathbf{u} \times \mathbf{v} = \mathbf{e}_1(u_2 v_3 - u_3 v_2) - \mathbf{e}_2(u_1 v_3 - u_3 v_1) + \mathbf{e}_3(u_1 v_2 - u_2 v_1)
$$

$$
\boxed{\mathbf{u} \times \mathbf{v} = \begin{bmatrix} u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 \\ u_1 v_2 - u_2 v_1 \end{bmatrix}}
$$

{{< callout type="info" >}}
In plain English: each component of the result is a \(2 \times 2\) determinant built from the other two components of the inputs. The pattern is cyclic: \((2,3)\), \((3,1)\), \((1,2)\).
{{< /callout >}}

**Two geometric facts define the cross product completely**:

**Direction:** \(\mathbf{u} \times \mathbf{v}\) is always orthogonal to both \(\mathbf{u}\) and \(\mathbf{v}\). You can verify this directly: \((\mathbf{u} \times \mathbf{v}) \cdot \mathbf{u} = 0\) and \((\mathbf{u} \times \mathbf{v}) \cdot \mathbf{v} = 0\). The orientation follows the **right-hand rule**: curl the fingers of your right hand from \(\mathbf{u}\) toward \(\mathbf{v}\), and your thumb points in the direction of \(\mathbf{u} \times \mathbf{v}\).
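
The boxed component formula and the orthogonality check invited above can be sketched in pure Python (the function names `dot` and `cross` are mine, not from the article's later implementation):

```python
def dot(u, v):
    # Sum of componentwise products.
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # Each component is a 2x2 determinant over the other two
    # components, in the cyclic pattern (2,3), (3,1), (1,2).
    return [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ]

u = [1, 2, 3]
v = [4, 5, 6]
w = cross(u, v)
print(w)                     # [-3, 6, -3]
print(dot(w, u), dot(w, v))  # 0 0, orthogonal to both inputs
```

Both dot products vanish, as the direction property promises.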

**Magnitude:** The length of the result equals the area of the parallelogram spanned by \(\mathbf{u}\) and \(\mathbf{v}\), which can be expressed as:

$$\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\|\mathbf{v}\|\sin\theta$$


{{< callout type="info" >}}
When \(\mathbf{u}\) and \(\mathbf{v}\) are parallel (\(\theta = 0°\)), the parallelogram is flat and the cross product is the zero vector. When they are perpendicular (\(\theta = 90°\)), the parallelogram has maximum area and \(\|\mathbf{u} \times \mathbf{v}\|\) is maximised. This is the exact opposite behaviour to the dot product, which is maximised when vectors are parallel and zero when perpendicular.
{{< /callout >}}
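
A minimal numerical sketch of the magnitude identity, using a pair of vectors in the \(xy\)-plane so the parallelogram area is easy to see (names are mine):

```python
import math

def norm(u):
    # Euclidean length: square root of the dot product with itself.
    return math.sqrt(sum(a * a for a in u))

u = [2, 0, 0]
v = [1, 3, 0]
w = [u[1] * v[2] - u[2] * v[1],
     u[2] * v[0] - u[0] * v[2],
     u[0] * v[1] - u[1] * v[0]]  # u x v = [0, 0, 6]

# Angle between u and v, recovered from the dot product.
cos_theta = sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))
theta = math.acos(cos_theta)

print(norm(w))                              # 6.0, the parallelogram area
print(norm(u) * norm(v) * math.sin(theta))  # 6.0 (up to rounding)
```

Both routes give the same area, matching \(\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\|\mathbf{v}\|\sin\theta\).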

**Key algebraic property, anticommutativity**:

$$
\mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u})
$$

Swapping the order flips the sign and the direction. This means the cross product is **not commutative**, unlike the dot product.
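
The sign flip can be checked numerically; a quick sketch with NumPy's built-in `np.cross`, which we also use for verification later:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Swapping the operands flips the sign of every component.
assert np.array_equal(np.cross(u, v), -np.cross(v, u))
print(np.cross(u, v))  # [-3.  6. -3.]
```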

## Python implementation

Let's implement everything from scratch, first in pure Python, then verify with NumPy.
44 changes: 44 additions & 0 deletions content/ai/math/algebra/vectors/index.es.md
@@ -259,6 +259,50 @@ $$
funciona precisamente gracias a esta geometría. Las relaciones semánticas se codifican como *direcciones* en el espacio vectorial, y encontrar `reina` significa hallar el vector con mayor similitud coseno al vector de consulta. Todo modelo de embedding moderno (BERT, GPT, sentence-transformers) hereda esta filosofía geométrica. La siguiente vez que leas sobre ***"espacios de representación"*** en un paper de IA, recuerda: están hablando literalmente de la geometría que acabas de derivar.
{{< /callout >}}

### El producto vectorial (producto cruz)

El producto punto toma dos vectores y devuelve un **escalar**. El **producto vectorial** toma dos vectores en \(\mathbb{R}^3\) y devuelve un **vector**, uno que es perpendicular a ambas entradas. Está definido únicamente en tres (y siete) dimensiones, lo que lo hace más especializado geométricamente que el producto punto.

Dados \(\mathbf{u} = [u_1, u_2, u_3]^T\) y \(\mathbf{v} = [v_1, v_2, v_3]^T\), el producto vectorial \(\mathbf{u} \times \mathbf{v}\) se calcula expandiendo el siguiente determinante simbólico:

$$
\mathbf{u} \times \mathbf{v} = \begin{vmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}
$$

Expandiendo por la primera fila:

$$
\mathbf{u} \times \mathbf{v} = \mathbf{e}_1(u_2 v_3 - u_3 v_2) - \mathbf{e}_2(u_1 v_3 - u_3 v_1) + \mathbf{e}_3(u_1 v_2 - u_2 v_1)
$$

$$
\boxed{\mathbf{u} \times \mathbf{v} = \begin{bmatrix} u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 \\ u_1 v_2 - u_2 v_1 \end{bmatrix}}
$$

{{< callout type="info" >}}
En términos sencillos: cada componente del resultado es un determinante \(2 \times 2\) construido a partir de las otras dos componentes de las entradas. El patrón es cíclico: \((2,3)\), \((3,1)\), \((1,2)\).
{{< /callout >}}

**Dos propiedades geométricas definen el producto vectorial por completo**:

**Dirección:** \(\mathbf{u} \times \mathbf{v}\) es siempre ortogonal tanto a \(\mathbf{u}\) como a \(\mathbf{v}\). Podés verificarlo directamente: \((\mathbf{u} \times \mathbf{v}) \cdot \mathbf{u} = 0\) y \((\mathbf{u} \times \mathbf{v}) \cdot \mathbf{v} = 0\). La orientación sigue la **regla de la mano derecha**: curvá los dedos de tu mano derecha desde \(\mathbf{u}\) hacia \(\mathbf{v}\), y el pulgar apunta en la dirección de \(\mathbf{u} \times \mathbf{v}\).

**Magnitud:** La longitud del resultado es igual al área del paralelogramo generado por \(\mathbf{u}\) y \(\mathbf{v}\):
$$
\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\|\mathbf{v}\|\sin\theta
$$

{{< callout type="info" >}}
Cuando \(\mathbf{u}\) y \(\mathbf{v}\) son paralelos (\(\theta = 0°\)), el paralelogramo es plano y el producto vectorial es el vector cero. Cuando son perpendiculares (\(\theta = 90°\)), el paralelogramo tiene área máxima y \(\|\mathbf{u} \times \mathbf{v}\|\) se maximiza. Este comportamiento es exactamente opuesto al del producto punto, que se maximiza cuando los vectores son paralelos y vale cero cuando son perpendiculares.
{{< /callout >}}
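
Como boceto en Python puro (los nombres `punto`, `cruz` y `norma` son supuestos míos), la fórmula por componentes y las dos propiedades geométricas se pueden verificar directamente:

```python
import math

def punto(u, v):
    # Suma de productos componente a componente.
    return sum(a * b for a, b in zip(u, v))

def cruz(u, v):
    # Cada componente es un determinante 2x2, patron ciclico (2,3), (3,1), (1,2).
    return [
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ]

def norma(u):
    # Longitud euclidiana.
    return math.sqrt(punto(u, u))

u = [2, 0, 0]
v = [1, 3, 0]
w = cruz(u, v)
print(w)                         # [0, 0, 6]
print(punto(w, u), punto(w, v))  # 0 0 (ortogonalidad)

theta = math.acos(punto(u, v) / (norma(u) * norma(v)))
print(norma(u) * norma(v) * math.sin(theta))  # 6.0, el área del paralelogramo
```

El mismo valor aparece por las dos vías, confirmando la identidad de la magnitud.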

**Propiedad algebraica clave, anticonmutatividad**:

$$
\mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u})
$$

Intercambiar el orden invierte el signo y la dirección. Esto significa que el producto vectorial **no es conmutativo**, a diferencia del producto punto.
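
El cambio de signo se puede comprobar numéricamente; un boceto rápido con `np.cross` de NumPy, que también usamos para verificar más adelante:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Intercambiar los operandos invierte el signo de cada componente.
assert np.array_equal(np.cross(u, v), -np.cross(v, u))
print(np.cross(u, v))  # [-3.  6. -3.]
```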

## Implementación en Python

Implementamos todo desde cero: primero en Python puro para ver la mecánica, luego verificamos con NumPy.