We will look at composition algebras that satisfy certain weaker laws of associativity. These algebras can then be used to represent generalizations of groups. We will then see how one can construct other algebras satisfying the same identities for use as modular representations, including both commutative and non-commutative algebras.

The Frobenius complexity of a local ring R measures asymptotically the abundance of Frobenius actions of order e on the injective hull of the residue field of R. It is known that, for Stanley-Reisner rings, the Frobenius complexity is either -infinity or 0. However, a complete description of the Frobenius complexity sequence c_e(R) for all values of e is not yet known when R is Stanley-Reisner. We will provide the answer for a large class of Stanley-Reisner rings, generalizing work of Alvarez Montaner, Boix and Zarzuela.

Two seemingly unrelated combinatorial objects and actions are increasing tableaux with K-promotion (arising in the K-theory of the Grassmannian) and rowmotion on plane partitions (arising in Lie theory). In this talk, we will give some background to understand these combinatorial objects and actions, describe the connection between them (joint with J. Striker and O. Pechenik), and then discuss how the results can be extended to the more general setting of increasing labelings of any partially ordered set (joint with J. Striker and C. Vorland).
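For readers unfamiliar with rowmotion, the action on order ideals can be sketched in a few lines. The sketch below is illustrative only: the poset (a product of two 2-chains), its encoding, and the function names `rowmotion` and `is_order_ideal` are all choices made here, not notation from the talk.

```python
# Rowmotion sends an order ideal I to the order ideal generated by the
# minimal elements of P not in I. We encode a small poset (the 2x2 grid
# a < b, c < d) as a dict mapping each element to the set of elements
# strictly below it.
below = {
    "a": set(),              # minimal element
    "b": {"a"},
    "c": {"a"},
    "d": {"a", "b", "c"},    # maximal element
}

def is_order_ideal(S):
    """S is an order ideal when it is downward closed."""
    return all(below[x] <= S for x in S)

def rowmotion(I):
    """The order ideal generated by the minimal elements of P \\ I."""
    complement = set(below) - I
    mins = {x for x in complement if not (below[x] & complement)}
    return mins | {y for x in mins for y in below[x]}  # downward closure

# Orbit of the empty ideal under rowmotion:
I = frozenset()
orbit = [I]
while True:
    I = frozenset(rowmotion(set(I)))
    if I == orbit[0]:
        break
    orbit.append(I)
assert all(is_order_ideal(set(J)) for J in orbit)
print(len(orbit))  # → 4
```

On this poset the empty ideal lies on an orbit of length 4 (the remaining two ideals form an orbit of length 2), consistent with the general fact that rowmotion on products of two chains has small, predictable order.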

Every electrical power system can be modeled by a graph G whose vertices represent buses and whose edges represent power lines. A phasor measurement unit (PMU) is a device that can be placed at a bus to observe the voltage at that bus as well as the current and its phase through all incident power lines. The problem of monitoring the entire electric power system using the minimum number of PMUs is closely related to the well-known vertex covering and dominating set problems in graph theory.
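As a toy illustration of the graph-theoretic connection, the sketch below brute-forces a minimum dominating set on a small hypothetical grid. This is a deliberate simplification: real PMU placement ("power domination") also includes propagation rules coming from Kirchhoff's laws, which are not modeled here, and the graph and function names are invented for this example.

```python
from itertools import combinations

# Toy power grid: vertices are buses, edges are power lines.
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]
n = 6
neighbors = {v: set() for v in range(n)}
for u, v in edges:
    neighbors[u].add(v)
    neighbors[v].add(u)

def dominates(S):
    """Every bus is either in S or adjacent to a bus in S."""
    return all(v in S or neighbors[v] & S for v in range(n))

def min_dominating_set():
    """Smallest dominating set by exhaustive search (fine at toy sizes)."""
    for k in range(1, n + 1):
        for S in map(set, combinations(range(n), k)):
            if dominates(S):
                return S
    return set(range(n))

print(min_dominating_set())  # → {0, 2, 4}
```

For this six-bus grid, no two buses suffice, so three monitored buses are needed even in the simplified domination model; the power-domination propagation rules typically reduce this count further.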

In this talk, we will give an overview of the PMU placement problem and its connections to commutative ring theory. We define the edge ideal I^P_G of a graph G on n vertices in the polynomial ring R = k[X_1, ..., X_n] and describe some algebraic properties of the quotient R/I^P_G. In particular, we will show that, when we restrict to trees, the Cohen-Macaulay property is implied by the unmixed property and implies the complete intersection property. We will also give examples to show that for non-trees, these implications can fail.

In 2017, Nasseh and Sather-Wagstaff proved that if M and N are finitely generated modules over a non-trivial fiber product R such that Tor^i_R(M,N)=0 for i ≫ 0, then M or N has finite projective dimension. Their proof uses Moore’s explicit free resolution of the second syzygy of M. Recently, Avramov, Iyengar, Nasseh, and Sather-Wagstaff have proved that the same conclusion holds for other classes of rings using differential graded algebra techniques. Thus, it is natural to ask whether one can explicitly describe DG algebra resolutions of fiber products. We will present progress on this question.

For A→B a flat homomorphism of commutative noetherian rings and M an injective A-module, we can calculate the injective dimension of M⊗_A B as a B-module from the injective dimensions of the fibers F(p)=B_p/pB_p for p∈Ass(M), via the formula id_B(M⊗_A B) = sup_{p∈Ass(M)} id_{F(p)} F(p) (Foxby, 1975). This formula can be used to recover the fact that injective modules remain injective under certain flat base changes, such as localization. We work towards a generalization of Foxby’s Theorem to calculate the Gorenstein injective dimension of the base change of a Gorenstein injective module.
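Displayed with the notation above, Foxby's formula reads as follows; the localization remark after the formula is a one-line sketch of how it recovers preservation of injectivity, not a statement from the talk.

```latex
\[
\operatorname{id}_B(M \otimes_A B)
  \;=\; \sup_{\mathfrak{p} \in \operatorname{Ass}(M)}
        \operatorname{id}_{F(\mathfrak{p})} F(\mathfrak{p}),
\qquad
F(\mathfrak{p}) = B_{\mathfrak{p}} / \mathfrak{p} B_{\mathfrak{p}}.
\]
% Sketch: for a localization A -> A_S, each fiber F(p) is the residue
% field k(p), which has injective dimension 0 over itself, so the
% supremum is 0 and M \otimes_A A_S remains injective.
```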

I will explain how sums of squares in the gluing algebra of partially labelled graphs can be used to prove graph density inequalities in extremal combinatorics. This is the essence of the semidefinite method of Lovasz and Szegedy, and of the Cauchy-Schwarz calculus of Razborov. I will introduce the sums of squares method, work through several examples of sums of squares proofs, and then present simple explicit inequalities that show strong limitations of the sums of squares method. No background in combinatorics is necessary. This is joint work with Annie Raymond, Mohit Singh and Rekha Thomas.

Blockchain technology is now going through explosive development, with the aim of creating a new generation of revolutionary financial technology. The most successful example is the new digital currency Bitcoin. The fundamental building blocks of blockchain technology are cryptographic algorithms, which is why Bitcoin is called a cryptocurrency. The main cryptographic algorithms used in blockchain technology are hash functions and elliptic curve digital signatures. Quantum computers are not a significant threat to the security of hash functions, but they can be fatal to elliptic curve digital signatures. In this presentation, we will first show how quantum computers can threaten the security of blockchain technology, and in particular why the existing blockchain technology used in Bitcoin cannot fundamentally avoid such a practical attack. Then we will explain the challenges we face if we simply plug in existing post-quantum cryptographic solutions as a drop-in replacement for the existing elliptic curve signatures, in particular the key size problem and a few others. In the end, we will present some of the new solutions we have been developing to deal with these fundamental problems, including a new type of proof-of-work algorithm, which, we believe, provides a viable solution for future long-term secure blockchain technology.
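To make the hash-function side concrete, here is a minimal sketch of the standard hash-based proof-of-work puzzle used in Bitcoin (find a nonce whose hash has a prescribed number of leading zero bits). The function name `mine` and the parameters are invented for this example, and this is the classical construction, not the new post-quantum proof-of-work algorithms the talk proposes; against such puzzles a quantum computer gains only Grover's quadratic speedup, which is why hash functions are considered the less threatened component.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Find a nonce so that sha256(block_data || nonce) begins with
    `difficulty_bits` zero bits -- the hash puzzle behind proof of work."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"example block header", 12)
digest = hashlib.sha256(b"example block header" + nonce.to_bytes(8, "big")).digest()
# The top 12 bits of the winning hash are all zero:
assert int.from_bytes(digest, "big") >> (256 - 12) == 0
```

Raising `difficulty_bits` doubles the expected work per bit, which is how the network tunes block production: verification stays a single hash while search cost grows exponentially.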

Iwan Duursma — University of Illinois at Urbana-Champaign

(Joint ADM/RTG Seminar)

Solutions for storing data on a single disk or a combination of several disks do not perform well at the scale of data centers that employ thousands of servers. Over the last decade, two new families of error-correcting codes have been introduced that better address the requirements of large-scale distributed data storage. We discuss the main parameters for these families and present a variety of algebraic and graph-theoretic methods that can be used to construct codes with good trade-offs among the main parameters.
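The simplest possible erasure code already shows the trade-off the talk studies: a single XOR parity block lets any one lost disk be rebuilt with storage overhead 1/k, versus the 100% overhead of full replication. The sketch below (function names and data invented here) is this baseline only; the code families designed for data centers additionally optimize repair bandwidth and locality, which plain parity does not.

```python
from functools import reduce

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Append one XOR parity block so any single lost block is recoverable."""
    return blocks + [reduce(xor_bytes, blocks)]

def recover(stripe, lost_index):
    """Rebuild the erased block as the XOR of the surviving ones."""
    survivors = [s for i, s in enumerate(stripe) if i != lost_index]
    return reduce(xor_bytes, survivors)

data = [b"abcd", b"efgh", b"ijkl"]
stripe = encode(data)          # 4 blocks stored across 4 disks
assert recover(stripe, 1) == b"efgh"   # disk 1 failed; its block is rebuilt
```

Here three data blocks are protected by one parity block (25% overhead), but repairing one block requires reading all three survivors; reducing that repair cost is precisely what regenerating-style constructions target.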

It has been observed since the early work of Richard Stanley that several well-known permutation statistics are “compatible” with the operation of shuffling permutations. In joint work with Ira Gessel, we formalize this notion of a shuffle-compatible permutation statistic and develop a unifying framework for studying shuffle-compatibility, which has close connections to the theory of P-partitions, quasisymmetric functions, and noncommutative symmetric functions. In this talk, I will survey the main results of our work as well as several new directions of research concerning shuffle-compatibility.
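Shuffle-compatibility can be checked experimentally on small cases. The sketch below (all names invented here) verifies it for the descent number: the distribution of des over all shuffles of two disjoint sequences depends only on their lengths and descent numbers, so two pairs with the same data give the same distribution.

```python
from collections import Counter

def shuffles(p, q):
    """All interleavings of p and q that keep each sequence's own order."""
    if not p:
        return [q]
    if not q:
        return [p]
    return [p[:1] + w for w in shuffles(p[1:], q)] + \
           [q[:1] + w for w in shuffles(p, q[1:])]

def des(w):
    """Number of descents of w."""
    return sum(w[i] > w[i + 1] for i in range(len(w) - 1))

# Two pairs on disjoint value sets with the same lengths (2, 2) and the
# same descent numbers (1 and 0), but with different values:
d1 = Counter(des(w) for w in shuffles((2, 1), (3, 4)))
d2 = Counter(des(w) for w in shuffles((3, 1), (2, 4)))
print(d1 == d2, dict(d1))  # → True {1: 3, 2: 3}
```

Both pairs produce six shuffles, three with one descent and three with two, matching the classical fact that des is shuffle-compatible.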

Every electric power system can be modeled by a graph G whose vertices represent electrical buses and whose edges represent power lines. A phasor measurement unit (PMU) is a monitor that can be placed at a bus to observe the voltage at that bus as well as the current and its phase through all incident power lines. The problem of monitoring the entire electric power system using the minimum number of PMUs is closely related to vertex covering and dominating set problems in graph theory.

In this talk, we will give an overview of the PMU placement problem and its connections to commutative ring theory. By defining the power edge ideal I^{P}_{G} of a graph G, we will show how to use graphs of electric power grids to generate polynomial rings with desired algebraic properties. In particular, we will classify the trees G for which I^{P}_{G} is Cohen-Macaulay and prove that every such ideal is also a complete intersection.

This project is joint work with Alan Hahn, Frank Moore, and Sean Sather-Wagstaff.

Integer Programming (or Integer Optimization) is the problem of optimizing an objective function over a set of feasible points, subject to the restriction that some variables must take integer values. Integer linear programming (optimizing a linear function over linear constraints and integer constraints) is NP-Hard, while the continuous counterpart, Linear Programming, is polynomial-time solvable. Since the linear case is NP-Hard, the non-linear case of integer programming is also NP-Hard. The story becomes quite different when the dimension is considered a fixed number. For instance, what is the complexity of integer linear programming when there are only 3 variables? Lenstra proved in the 1980s that, for any fixed dimension, integer linear programming can be solved in polynomial time. This result hinges on lattice basis reduction algorithms such as the famous LLL algorithm. Surprisingly, the question of complexity in fixed dimension is very unclear even for quadratic integer programming. We will survey Lenstra-type algorithms for integer programming and show recent results on convex and non-convex mixed integer programming.
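For intuition on why fixed dimension helps, the sketch below solves a tiny 2-variable integer program by exhaustive search over a bounding box (all names and the instance are invented here). This naive search is exponential in the box size; Lenstra's contribution was to replace it with LLL-based lattice reduction and branching on a bounded number of lower-dimensional slices, yielding polynomial time once the dimension is fixed.

```python
from itertools import product

def solve_ip_2d(A, b, c, box):
    """Brute-force 2-variable integer program: maximize c.x subject to
    Ax <= b with x integer inside the given box."""
    best, best_x = None, None
    (lo1, hi1), (lo2, hi2) = box
    for x in product(range(lo1, hi1 + 1), range(lo2, hi2 + 1)):
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_j
               for row, b_j in zip(A, b)):
            val = sum(c_i * x_i for c_i, x_i in zip(c, x))
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x

# max x + y  subject to  2x + y <= 7,  x + 3y <= 9,  x, y >= 0
A = [[2, 1], [1, 3], [-1, 0], [0, -1]]
b = [7, 9, 0, 0]
print(solve_ip_2d(A, b, (1, 1), ((0, 10), (0, 10))))  # → (4, (2, 2))
```

Note the gap with the continuous relaxation: the LP optimum is 4.6 at the fractional point (2.4, 2.2), while the best integer point achieves only 4, which is exactly the kind of rounding gap that makes integer programming hard in general.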