This article is currently unfinished.
In college, I attended a talk by a fellow student about semigroup theory, which concluded (IIRC) with proving the isomorphism theorems for semigroups. One point brought up earlier was an analogue of Cayley's theorem, which states that every group can be embedded into the group of permutations of some set. The analogous statement for semigroups is that every semigroup can be embedded in the semigroup of endofunctions of some set. This ties into a similar pattern for other mathematical structures, some examples of which will be listed here:
- Every group embeds into the group of permutations of some set (Cayley's theorem).
- Every semigroup (and every monoid) embeds into the semigroup of endofunctions of some set.
- Every ring embeds into the endomorphism ring of some abelian group.
- Every finite-dimensional Lie algebra embeds into a Lie algebra of matrices (Ado's theorem).
- Every poset embeds into a powerset ordered by inclusion.
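To make the Cayley-style embedding concrete, here is a minimal Python sketch (all names are mine) of the left-translation embedding for the cyclic group Z/3: each element becomes the permutation x ↦ a·x. For a semigroup without an identity, one would first adjoin one so that distinct elements give distinct translations.

```python
from itertools import product

# Cayley-style embedding for the cyclic group Z/3: each element acts
# on the underlying set by left translation x -> a * x.
Z3 = [0, 1, 2]
op = lambda a, b: (a + b) % 3

def left_translation(a):
    # the endofunction (here, a permutation) induced by a
    return tuple(op(a, x) for x in Z3)

embedding = {a: left_translation(a) for a in Z3}

assert len(set(embedding.values())) == len(Z3)   # the embedding is injective
for a, b in product(Z3, repeat=2):               # ...and a homomorphism:
    composed = tuple(embedding[a][x] for x in embedding[b])
    assert composed == embedding[op(a, b)]       # λ_a ∘ λ_b = λ_{a·b}
```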
Of course, there is another striking pattern: all of the examples given above have some inherent notion of associativity. Now, the Lie bracket of a Lie algebra is not associative, but given that Lie algebras arose from Lie groups, there is still a link with associativity. The case of posets can also be included, if one interprets posets as being skeletal thin categories. What one could conclude from this is that any algebraic structure representing actually interesting phenomena must have some notion of associativity.
Now, after the talk, I went up to this student and asked: if groups are permutations on sets, and semigroups are functions on sets, what are quasigroups? I knew that the study of quasigroups had some practical applications regarding Sudoku, so they must be related to something concrete, right? The response I got was somewhat disappointing: full non-associativity is too combinatorial to be "algebraic" in any interesting way. In retrospect, this makes sense, given that "non-associative" objects, in practice, tend to satisfy weaker conditions, such as alternativity. Yet, this disappointment still lingered in my mind, leaving me wondering why being meaningfully "algebraic" must imply associativity in some way. It was later that I realized the reason: associativity is intricately tied with (perhaps identical to) the concatenation of objects.
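For a concrete taste of full non-associativity, here is a small Python sketch (my own example, not from the talk): subtraction mod 3 has a multiplication table that is a Latin square, hence a quasigroup — the combinatorial structure behind Sudoku-style puzzles — and yet it fails associativity.

```python
# Subtraction mod 3: a quasigroup (its table is a Latin square) that
# is not associative.
n = 3
op = lambda a, b: (a - b) % n
elements = list(range(n))

# Latin square check: each row and each column is a permutation.
for a in elements:
    assert sorted(op(a, b) for b in elements) == elements  # row a
    assert sorted(op(b, a) for b in elements) == elements  # column a

# So division works (unique solutions to a*x = b and y*a = b), yet:
assert op(op(0, 1), 1) != op(0, op(1, 1))  # associativity fails
```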
There is a kind of algebraic structure that deals essentially with concatenation: the theory of monoids. Of course, on the one hand, this designation seems trivial, as we could take any binary operation on some set and declare that it "concatenates" objects. However, monoids in particular have a special relationship to concatenation as it is understood in the usual sense of "adjoining" two objects together. The canonical forgetful functor from Mon to Set that takes every monoid to its underlying set has a left adjoint, which constructs free monoids. But remember that the free monoid generated by some set is the monoid whose elements are finite-length strings, with elements of this set as the "alphabet." In this sense, the binary operation on this free monoid quite literally is the concatenation of strings, so using this term is actually meaningful. We can then say that more general monoids are strings over some alphabet where we have decided to identify certain strings with each other. What this should indicate is that monoids have, in a particular sense, "linguistic" content, where "linguistic" brings to mind formal languages. This connection to formal languages is further entrenched by monoids (and semigroups) being used in automata theory and theoretical computer science, where they model specific types of abstract computers, called "automata," that are used as tools for "recognizing" certain classes of formal languages. The key takeaway from this is that monoids should be considered as linguistic objects.
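Python's built-in strings make the free-monoid picture literal. In the sketch below, `normal` is an illustrative name of my choosing for a normal-form map that implements one possible set of identifications (commutativity), turning the free monoid into the free commutative monoid.

```python
# The free monoid on {a, b, c}: Python strings under concatenation.
a, b, c = "a", "b", "c"
assert (a + b) + c == a + (b + c)   # associativity is literal for strings
assert "" + a == a + "" == a        # the empty string is the identity

# A general monoid is "strings with identifications": identifying xy
# with yx gives the free commutative monoid, whose normal forms are
# sorted strings (i.e. multisets of letters).
normal = lambda s: "".join(sorted(s))
assert normal("ba" + "ca") == normal("aabc")
```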
Now, this covers the topic of monoids, but notably, it only covers the case of monoids in Set with the Cartesian product as the monoidal product. When working in category theory, one quickly comes across the notion of monoid objects residing in more general contexts. In this case, we have a given category C, which is equipped with a functor ⊗ from C×C to C satisfying some nice properties. Again, I note that these "nice properties" include a somewhat weakened form of associativity, weakened enough to not violate the principle of equivalence. Because we also give C an element I, called the identity of ⊗, we can think of C itself as a categorified version of a monoid. However, once we have this structure in place, we can also define the notion of monoid objects within C. These are similar to standard monoids, but the binary operation is from m⊗m to m and the identity is from I to m. While this sounds needlessly abstract, some familiar examples of algebraic structure now show up as monoid objects internal to monoidal categories. For example, if we equip the category of Abelian groups with the standard tensor product, then "monoids in Ab" are actually none other than rings. Generalizing this, if R is a commutative ring, then the category of R-modules, with the tensor product over R, has the category of R-algebras as its category of monoids. (For a noncommutative ring R, the tensor product over R instead makes the category of R-bimodules monoidal, and its monoid objects are the R-rings.)
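As a sanity check of the "rings are monoids in Ab" slogan, the sketch below verifies for the ring Z/5 that multiplication is bilinear (which is exactly what lets it descend to a map out of the tensor product), associative, and unital. The function names are mine.

```python
from itertools import product

# The ring Z/5 as a monoid object in abelian groups: multiplication
# is bilinear (distributive), associative, and unital.
n = 5
add = lambda x, y: (x + y) % n
mul = lambda x, y: (x * y) % n
R = range(n)

for x, y, z in product(R, repeat=3):
    assert mul(add(x, y), z) == add(mul(x, z), mul(y, z))  # bilinear: 1st slot
    assert mul(x, add(y, z)) == add(mul(x, y), mul(x, z))  # bilinear: 2nd slot
    assert mul(mul(x, y), z) == mul(x, mul(y, z))          # μ is associative
assert all(mul(1, x) == x == mul(x, 1) for x in R)         # the unit I -> m picks out 1
```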
There is one other aspect that can be generalized from monoids over Set to more general monoidal categories: the notion of free monoids. Unlike in Set, not every monoidal category is guaranteed to contain free monoids for every object within it. However, sometimes, we can guarantee their existence. In particular, if C has countable coproducts, and ⊗ preserves them in each variable, then this holds. Then, the underlying object of the free monoid generated by an object a is the coproduct of a's nth tensor powers, over all n. If this description does not immediately seem familiar, remember that the set of strings over an alphabet is the sum over its nth Cartesian powers. What this means is that the notion of "the set of strings over an alphabet" is instantly generalized to this broader class of monoidal categories. We will also extend the picture of general monoid objects as "objects of strings with certain identifications made" to these other categories. In fact, for the sake of convenience, this picture will be referenced even when the monoidal category does not necessarily have all free monoid objects.
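The coproduct-of-tensor-powers formula can be checked directly in Set for small cases. The sketch below (variable names mine) builds the Cartesian powers of a two-letter alphabet and confirms they assemble, disjointly, into the strings of bounded length.

```python
from itertools import product

# Free monoid as a coproduct of tensor powers, checked in Set: the
# disjoint union A^0 + A^1 + ... + A^4 is the set of strings of
# length <= 4 over A.
A = ["a", "b"]
max_len = 4
tensor_powers = [list(product(A, repeat=k)) for k in range(max_len + 1)]
strings = {"".join(t) for power in tensor_powers for t in power}

assert len(strings) == sum(len(A) ** k for k in range(max_len + 1))
assert "" in strings and "abba" in strings   # the empty string and a length-4 word
```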
Now, if the notion of "strings over an alphabet" can be retained, how much of Mon's "linguistic" content can we recover? There is clearly some vestige of it, but the general case clearly seems "deformed" when compared to the case in Set. This choice of words on my part is no accident. There is a very particular sense in which these monoids are deformed versions of monoids over Set. In order to understand this relationship, we must venture into the territory of the almost-mythical "field with one element," F1. While the object itself is still elusive (though it seems ever-closer as the years pass), its intended relationship with finite fields is known. The intuition is that the finite field Fq is a "deformation" of F1, bringing to mind quantum deformations. I claim that it is in this sense that general monoid objects are "deformations" of standard monoids. In fact, one of the earlier attempts to construct "algebras over F1" concluded that (multiplicative) monoids might be appropriate.
While monoids have been shown to have links with formal languages, more general monoid objects do not have an obvious link with this area of study. However, with the case of rings, there is a known link with another subject matter: rings are often used to encode data about "spaces" in a broad sense. There are two classic cases of this, which then form the prototypes for other examples of this general construction:
- Gelfand duality: locally compact Hausdorff spaces correspond to commutative C*-algebras, via the algebra of continuous complex-valued functions (vanishing at infinity) on a space.
- Affine varieties correspond to their coordinate rings, i.e. the rings of regular functions on them.
In both of these cases, these correspondences led to attempts at associating broader classes of these objects with some notion of space. For instance, every commutative C*-algebra represents a locally compact space, so perhaps noncommutative ones represent "noncommutative spaces." In the case of varieties, it was realized that the abstract description of a variety via its regular functions could be applied to any commutative ring. This eventually led to the current notion of a scheme, in which a space is locally described by duals of commutative rings. Blending these two ideas together, there have also been attempts at generalizing the class of schemes to include "noncommutative schemes."
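A finite toy version of "spaces from rings" can be run directly: for a finite set X, the ring of functions X → Z/2 under pointwise operations has exactly one ring homomorphism to Z/2 per point of X, namely evaluation at that point. A minimal sketch, with names of my own choosing:

```python
from itertools import product

# Recovering the points of a finite "space" X from its ring of
# Z/2-valued functions: the ring homomorphisms to Z/2 are exactly
# the evaluations at points.
X = range(3)
ring = list(product([0, 1], repeat=len(X)))   # all functions X -> Z/2, as tuples
one = tuple(1 for _ in X)
add = lambda f, g: tuple((a + b) % 2 for a, b in zip(f, g))
mul = lambda f, g: tuple(a * b for a, b in zip(f, g))

def is_hom(phi):  # phi : ring -> Z/2, given as a dict
    return phi[one] == 1 and all(
        phi[add(f, g)] == (phi[f] + phi[g]) % 2
        and phi[mul(f, g)] == phi[f] * phi[g]
        for f in ring for g in ring)

homs = []
for values in product([0, 1], repeat=len(ring)):  # brute-force all candidate maps
    phi = dict(zip(ring, values))
    if is_hom(phi):
        homs.append(phi)

assert len(homs) == len(X)   # exactly one homomorphism per point of X
```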
This trend of associating algebraic objects to geometric objects in a dual manner eventually produced what is known as Isbell duality. It was William Lawvere who determined that, given a category of "test spaces," one could construct both general "spaces" and "quantities" from them. A space X would be a presheaf on the test spaces, assigning to each test space U the set of all maps from U to X. A quantity A would be a copresheaf on the test spaces, assigning to each test space U the set of all maps from A to U. The content of the Yoneda lemma provides this idea with a "consistency result," showing that a space is determined by how one can map into it. There is a good chance that some of you might accept the notion that X is therefore a "space," yet are confused at how A is a "quantity." The intuition is that A is a "U-valued function," and since the usual notion of a function on a point is a quantity, then so is A.
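A toy version of this probing picture can be written down with finite sets playing both roles (an assumption of mine, purely for illustration): the presheaf of a space X sends a test set U to the set of all maps U → X, and probing by a single point already recovers the points of X.

```python
from itertools import product

# A "space" X probed by finite test sets: to each test set U, assign
# the set of all maps U -> X (encoded as tuples of values).
def maps(U, X):
    return set(product(X, repeat=len(U)))

X = {"p", "q", "r"}
# the restriction of the presheaf of X to 1- and 2-point probes
presheaf = {n: maps(range(n), X) for n in (1, 2)}

assert len(presheaf[1]) == len(X)       # probing by a point recovers the points
assert len(presheaf[2]) == len(X) ** 2  # 2-point probes see pairs of points
```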
Perhaps the most illustrative example of this would be the case on the site CartSp of Cartesian spaces, i.e. ℝ^n for some n. These objects are clearly spaces, so saying that a space X is defined by how each ℝ^n can map into it makes sense. Here, the notion that A is a "quantity" is also intuitive, because it is now an ℝ^n-valued function, like what is taught in school. Hopefully, the intuition from both cases of this example carries over to the more general definitions of space and quantity.
By the way, did you notice that I said that CartSp consisted of Cartesian spaces, but never said exactly what kind of maps we were using? I could have been implying continuous maps, smooth maps, linear maps, affine maps, or perhaps something else, yet I took no care to mention which ones. That's because this idea applies to all of those notions of CartSp. The point was to use the intuition of "real-valued functions."
Extending from the two previous examples involving C*-algebras and rings of regular functions, Isbell duality can also explain other correspondences. For instance, it explains Stone duality, originally a correspondence between Stone spaces and Boolean algebras, but since extended to other examples. It also allows one to embed smooth manifolds into the category of formal duals of ℝ-algebras, using their rings of smooth ℝ-valued functions. This embedding goes further: the formal duals of so-called "C∞-rings" form an enlargement of the category of smooth manifolds, whose objects are called smooth loci.
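The finite fragment of Stone duality is small enough to verify by brute force: a finite Boolean algebra is the powerset of its atoms, so the underlying "space" can be read off from the algebra alone. A sketch with illustrative names:

```python
from itertools import combinations

# Finite Stone duality in miniature: recover a finite set from its
# powerset Boolean algebra via the atoms (minimal nonzero elements).
points = {"x", "y", "z"}
algebra = [frozenset(c) for r in range(len(points) + 1)
           for c in combinations(sorted(points), r)]   # the full powerset
nonzero = [a for a in algebra if a]
atoms = [a for a in nonzero if not any(b < a for b in nonzero)]

assert len(atoms) == len(points)        # the atoms recover the points
assert len(algebra) == 2 ** len(atoms)  # the algebra is the powerset of its atoms
```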
Returning to the quest to unravel F1, another attempt at defining "F1-algebras" involved generalizing commutative rings. It was the insight of Nikolai Durov (better known as one of the co-founders of both VK, formerly VKontakte, and Telegram) that led to this generalization. He realized that one could identify a commutative ring with the theory of commutative algebras over that ring. This is because there is a direct correspondence between ring homomorphisms and extensions of scalars between the categories of algebras over those rings. Because the theory of commutative algebras over some commutative ring is encoded by a monad, Durov decided that arbitrary monads were of interest. Specifically, "generalized rings (after Durov)" are defined as commutative finitary monads over Set. CRing embeds nicely into this category. Durov proposes that F1 be defined as the monad representing the theory of pointed sets. If we take the Durov-style view, perhaps monads themselves represent exactly the kinds of "spaces" that we would classically consider to be "spaces."
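The monad for pointed sets sends a set X to X with one basepoint adjoined — Haskell's familiar "Maybe." A Python sketch (with my own tagged-tuple encoding) checks the monad laws on a few samples:

```python
# T X = X with one basepoint adjoined; elements of T X carry tags.
NOTHING = ("nothing",)                      # the adjoined basepoint
unit = lambda x: ("just", x)                # X -> T X
fmap = lambda f, t: ("just", f(t[1])) if t[0] == "just" else NOTHING
join = lambda tt: tt[1] if tt[0] == "just" else NOTHING   # T(T X) -> T X

for t in [unit(0), unit(1), NOTHING]:       # elements of T X
    assert join(unit(t)) == t               # left unit law
    assert join(fmap(unit, t)) == t         # right unit law
for ttt in [unit(unit(unit(5))), unit(NOTHING), NOTHING]:
    assert join(fmap(join, ttt)) == join(join(ttt))  # associativity law
```

The algebras for this monad are exactly the pointed sets: an algebra map T X → X amounts to choosing where the basepoint goes.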
But wait, what is a monad on Set, again? It's nothing more than a monoid object in the category of endofunctors, with composition as the product. Once again, monoids appear! Actually, just saying that monads are a particular type of monoid is true, but a priori takes a certain viewpoint. Given a monoidal category, a monoid object in this category is also a monad on its delooping bicategory. Therefore, there is a single underlying concept of both monoids and monads, with two different viewpoints on which is primary. This underlying concept can ultimately be described as concatenation, which is either the source of associativity or, maybe, the same thing as associativity. With monoids, concatenation comes from concatenating strings, while with monads, concatenation comes from concatenating endofunctors. This means, by the way, that there is even a notion of associativity underlying the theory of quasigroups, as it is also described by a particular monad.
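The "monoid in endofunctors" slogan can be tested on the list monad, where the analogy with string concatenation is exact: unit wraps, join flattens, and the associativity law says that flattening a triply nested list inside-first or outside-first agrees. A minimal sketch:

```python
# The list monad: unit wraps an element, join flattens one layer.
unit = lambda x: [x]
fmap = lambda f, xs: [f(x) for x in xs]
join = lambda xss: [x for xs in xss for x in xs]

xs = [1, 2, 3]
assert join(unit(xs)) == xs                    # left unit law
assert join(fmap(unit, xs)) == xs              # right unit law
xsss = [[[1], [2, 3]], [], [[4]]]
assert join(fmap(join, xsss)) == join(join(xsss))   # associativity of join
```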
Given that I was hyping up the "linguistic" nature of monoids, how is this at all linked with the concept of monads? There are two possible views of this connection, and I am not quite sure which one is more appropriate. The first possibility: The immediate quality of both algebraic and geometric objects is that they are objects equipped with some sort of structure. Therefore, perhaps the general concept of "structure" controls these objects in the same way that grammar controls languages. The second possibility: Perhaps monads describe algebraic theories because they literally describe algebraic theories, in a linguistic sense.
Given how much I have been describing algebra and geometry as dual to each other, some of you might be thinking that, perhaps, geometry is coalgebraic. This isn't an unreasonable assumption to make, but if I am interpreting things correctly, geometry and coalgebra are dual to algebra in different manners. Geometry is dual to algebra in the sense that categories of algebraic objects are dual to categories of geometric objects. However, coalgebra is dual to algebra in the sense that coalgebra is dual to the very concept of algebra itself.
Here's an example that should illustrate this: Durov sees certain types of monads on Set as being generalizations of rings. Remember that the typical examples of what are classically considered to be "spaces" are usually described by certain types of rings. From this, we can claim that the category of monads on Set is dual to the category of what are classically considered to be "spaces." However, the category of comonads on Set, which control coalgebraic phenomena, is not the same as the dual of the category of monads. In one instance, we defined a category and then took its dual, while in the other instance, we defined a category with the dual definition. Perhaps this is not immediately obvious, but they are not the same methods of defining categories.
By the way, given that I was hyping up the link between linguistics and structure, perhaps there is a link between "colinguistics" and dynamics. Unfortunately, this avenue is somewhat disappointing. Remember that the linguistics connection came from the link between monoids and strings. However, because the monoidal structure on Set is Cartesian, any set M is a comonoid in a unique way: one equips M with the diagonal map to M×M. Therefore, "colinguistics" technically exists, but only in the most trivial manner. Although, maybe this is to be expected. Given that coalgebra is ultimately the theory of dynamics, the triviality of "colinguistics" might just be because bare sets lack nontrivial dynamics.
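This unique comonoid structure is simple enough to write out. In the sketch below (names mine), the comultiplication is the diagonal and the counit is the unique map to the point; coassociativity and the counit laws hold on the nose.

```python
# Every set is a comonoid in Set: comultiplication is the diagonal,
# and the counit is the unique map to the one-point set.
delta = lambda x: (x, x)      # Δ : M -> M × M
counit = lambda x: ()         # ε : M -> 1

for x in ["a", 0, (1, 2)]:
    left = (delta(x)[0], delta(delta(x)[1]))   # (id × Δ) ∘ Δ
    right = (delta(delta(x)[0]), delta(x)[1])  # (Δ × id) ∘ Δ
    assert left == (x, (x, x)) and right == ((x, x), x)  # coassociativity
    assert delta(x)[0] == x == delta(x)[1]     # counit laws: deleting a copy returns x
```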