How do humans learn the syntax and semantics of words from language experience? How does the mind discover abstract relationships between concepts? Computational models of distributional semantics can analyze a corpus to derive representations of word meanings in terms of each word’s relationship to all other words in the corpus. While these models are sensitive to topic (e.g., tiger and stripes) and synonymy (e.g., soar and fly), they have limited sensitivity to part of speech (e.g., book and shirt are both nouns). By augmenting a holographic model of semantic memory with additional layers of representations, we demonstrate that sensitivity to syntax depends on exploiting higher-order associations between words. Our hierarchical holographic memory model bridges the gap between models of distributional semantics and unsupervised part-of-speech induction algorithms, providing evidence that semantics and syntax exist on a continuum and emerge from a unitary cognitive system.