
Blog from March, 2008

Summer School on Logic Programming and Computational Logic

From Carla Romero, at CRA. She notes that the deadline for student grants is April 15th.

CRA-W/CDC Summer School 
on 
Logic Programming and Computational Logic 

http://www.cs.nmsu.edu/~ipivkina/compulog.htm 


New Mexico State University 
Las Cruces, NM, USA 
July 24-27, 2008 



New Mexico State University is happy to announce the CRA-W/CDC Summer 
School on Logic Programming and Computational Logic. 

The summer school will be held on the campus of New Mexico State 
University in beautiful Las Cruces, New Mexico. 

The summer school is intended to encourage WOMEN GRADUATE STUDENTS, 
POST-DOCTORAL STUDENTS, and YOUNG RESEARCHERS to develop interest and 
expertise in the important areas of constraints, logic programming, 
answer set programming, computational logic and their applications. 

Exceptional undergraduate students in their senior year are also 
encouraged to attend. 

The summer school will include lectures as well as other events aimed 
at providing a complete and fulfilling learning experience. The 
lectures will be given by internationally renowned researchers who 
have made significant contributions to the advancement of these 
disciplines. 

The summer school is a good opportunity for quickly acquiring 
background knowledge on important areas of computational logic. 


The summer school will consist of six 1/2 day tutorials on the 
following topics: 

* Theoretical Foundations of Logic Programming 
  [Miroslaw Truszczynski, U. of Kentucky] 

* Answer Set Programming 
  [Torsten Schaub, U. of Potsdam] 

* Implementation and Execution Models for Logic Programming 
  [Manuel Hermenegildo, Polytechnic Univ. of Madrid] 

* Logic Programming and Multi-agent Systems 
  [Francesca Toni, Imperial College] 

* Foundations of Constraint and Constraint Logic Programming 
  [TBA] 

* Foundations of Semantic Web and Computational Logic 
  [Sheila McIlraith, University of Toronto] 


There is no charge for participation in the summer school, and a 
number of fellowships, covering all reasonable travel expenses, are 
available to women participants. Applicants are asked to submit an 
application for admission to the summer school consisting of the 
following items: 

1. a one page statement of interest, explaining your 
research background and what you expect to gain from 
the summer school 

2. a short (2-page) vitae 

Applications should be submitted in electronic form to: 

   epontell@cs.nmsu.edu and ipivkina@cs.nmsu.edu 

All submissions will be acknowledged with an email. 

If you do not receive acknowledgment within 3 working days, please 
email Enrico Pontelli (epontell@cs.nmsu.edu). 

Lodging will be available at local hotels; we will also provide a 
number of affordable accommodations on the NMSU campus. 


Important dates 
--------------- 

* Requests for student grants: April 15, 2008; 
* Application for Admission: April 25, 2008; 
* Notification of Admission and grants: May 1st, 2008; 
* Summer School: July 24-27, 2008 

Organizers 
---------- 

* Enrico Pontelli, New Mexico State University, USA 
* Inna Pivkina, New Mexico State University, USA 
* Son Cao Tran, New Mexico State University, USA 

D&D, Teaching Mathematics, and Beautiful Code

The big news of last week, at least in certain circles, is the death of Gary Gygax, creator of what was in my youth the hugely popular game Dungeons & Dragons, in the view of many a predecessor of and inspiration for the modern computer gaming industry.

Of course, online comics have had a field day with this, with some of the best tributes to Gygax that the medium can deploy. My personal favorite is xkcd's:

A deeper reflection on the history and influence of D&D can be found in Paul La Farge's 2006 essay in The Believer (from Grand Text Auto):

Slightly closer to home, I came across this wonderful essay over the weekend, a critique by Paul Lockhart of current K-12 mathematics education in the US. The bottom line is that we fail miserably to teach mathematics in a way that highlights how interesting it is. For too many students, mathematics is drudgery and symbol manipulation without rhyme or reason. Part of the problem is that our culture as a whole does not readily see mathematics as art, something that becomes obvious as soon as one really starts understanding the subject. For instance, most characterizations of proofs (by far the biggest hurdle in most standard maths undergrad curricula) given by mathematicians are couched in terms of aesthetic value judgments. (The answers I got to the question "But what is an acceptable proof?" during my first year at McGill had, in retrospect, much in common with answers to the question "But what is a nice painting?")

In a wonderful happenstance of synchronicity, I also came across the following blogpost by Kathy Sierra over at Creating Passionate Users (dating from almost two years ago, so nothing recent), about aesthetics in program development. It has been my personal belief for a while now that software development is very much like architecture, half technique and half art, and that a well-developed aesthetic sense is in fact necessary for writing maintainable and correct code.

Which leads to the obvious question: are we messing up CS education the same way we are messing up Maths education, and for similar reasons?

Algebraic Theories of Quasivarieties

Back from a short holiday in the White Mountains (NH), snowshoeing and cross-country skiing. We stayed at a delightful little bed and breakfast near Gorham, the Mt Washington Bed and Breakfast. I highly recommend it. (Tell Mary Ann that Riccardo sent you.)

I run a logic discussion group, and this semester we are slowly working our way towards Plotkin and Power's work on equational characterizations of effects, an alternative, based on Power's Freyd categories, to Moggi's monadic approach. Perhaps I will get to talk about this later.

Anyways, we started last month by looking at Lawvere theories, Lawvere's 1960s approach to universal algebra via category theory. The idea is pretty simple. An algebraic structure can be defined by an underlying set A along with a set of operations on A and a set of equations those operations must satisfy. This can be represented rather simply using a category whose objects are the finite products of A, that is, A^0, A^1, A^2, ..., and whose arrows are the operations (including all projections). We can encode the equations by stating that some diagrams commute.

Let's look at a simple example, groups. A group is traditionally described as a tuple (G,*,-,e), where G is a set (the elements of the group), * is a binary operation G x G -> G, - is a unary operation G -> G, and e is a constant (or nullary operation) which we can represent as 1 -> G, where 1 is any one-element set, subject to the following equations:

  • x * (y * z) = (x * y) * z
  • x * e = x = e * x
  • x * (-x) = e = (-x) * x

The Lawvere theory for groups, call it LG, is basically the category with objects a^0 = 1, a^1, a^2, ... (for some unspecified object a), whose arrows include all projections together with three arrows * : a^2 -> a, - : a -> a, and e : 1 -> a, subject to the following diagrams commuting (it's a pain to draw diagrams in Confluence, so I shan't draw them and will write them out as equations instead): 

  • * o < * o <p1,p2> , p3 > = * o < p1 , * o <p2,p3> >
  • * o < e o ! , id_a > = id_a = * o < id_a , e o ! >
  • * o < id_a , - > = e o !
  • * o < - , id_a > = e o !

(where id_a : a -> a is the identity arrow, ! : a -> 1 is the unique arrow into the terminal object, o is arrow composition, <f,g> is the usual pairing of arrows into a product, and p1, p2, p3 are the projections out of a^3). These are just the three group equations above, written point-free.

An actual group (i.e., a model of the theory) can be taken to be a finite-product-preserving functor from the category LG to the category of sets. Such a functor maps the basic object a of LG to some set G (representing the elements of the group), and functoriality together with preservation of products ensures that the images of *, -, and e are operations on G satisfying the group equations. (Try it!) In other words, a group is a product-preserving functor LG -> Set.
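
To make that slogan concrete, here is a minimal Haskell sketch of what such a model amounts to in practice: a carrier type together with interpretations of *, -, and e. The class and instance are mine, purely for illustration; Haskell cannot enforce the commuting diagrams, so the laws live in comments.

    -- A model of LG, concretely: a carrier with interpretations of the
    -- three generating arrows. The laws are the commuting diagrams above.
    class Group g where
      unit :: g            -- interprets e : 1 -> a
      inv  :: g -> g       -- interprets - : a -> a
      mult :: g -> g -> g  -- interprets * : a^2 -> a
      -- Laws (not checked by the compiler):
      --   mult x (mult y z) == mult (mult x y) z
      --   mult x unit == x  and  mult unit x == x
      --   mult x (inv x) == unit  and  mult (inv x) x == unit

    -- The integers under addition: a standard example of a model LG -> Set.
    instance Group Integer where
      unit = 0
      inv  = negate
      mult = (+)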

A good introduction to this is Steve Awodey's lecture notes on categorical logic, specifically section 2.1 here: 

Now, one question I had at some point is how this handles algebraic structures defined by implications (that is, Horn clauses). Think of these as conditional equalities. The main example of this that I encounter is Kleene algebras, axiomatized as follows, using Kozen's axiomatization. Recall that a Kleene algebra is an algebra with signature ( K , . , + , 0 , 1 , * ), where ( K , . , + , 0 , 1 ) is an idempotent semiring, and * is a unary operation subject to the following:

  • 1 + x.(x*) <= x*
  • 1 + (x*).x <= x*
  • if a.x <= x, then (a*).x <= x
  • if x.a <= x, then x.(a*) <= x

(Here, x <= y is defined to be x + y = y, which is a partial order when you have an idempotent semiring.) Note that these axioms are not purely equational: the last two are really implications between equations. How do you model this using the Lawvere framework?
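
To see why the last two axioms take us outside purely equational reasoning, just unfold the definition of <=; the third axiom, for instance, becomes a Horn clause over equations:

    \[
    a \cdot x + x = x \;\Longrightarrow\; a^{*} \cdot x + x = x
    \]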

The set of models of an equational theory that uses conditional equations forms a quasivariety. (In contrast, the set of models of an equational theory that uses only unconditional equations forms a variety; quasivarieties, as the name implies, are almost varieties, but not quite.) The following paper, which appeared in Journal of Algebra 208 (1998), 379-398, seems to hint at the appropriate Lawvere-style development for quasivarieties, although it is quite technical:

What I am still not sure of, though, is how to concretely describe the resulting category for a specific conditional equational theory. In particular, given the axiomatization of Kleene algebras above, can we concretely describe the category LK that plays the role of the Lawvere theory in the Adamek-Porst framework, and for which the (suitable) functors LK -> Set are exactly the Kleene algebras?

Separation Logic

Okay, a couple of people have been asking me about separation logic lately, out of the blue, and I am taking it as a sign. This is of interest to me because Andy Gordon and I have been trying to wrap our collective heads around separation logic, a project I hope to be able to talk about before long.

So what's the big hoopla? To understand the why of separation logic, you first have to understand the basics of reasoning about programs, Hoare-logic style. Hoare logic is a logical system whose main judgment is of the form {P}C{Q}, where P and Q are first-order logic formulas and C is a program, generally in some sequential imperative programming language. The interpretation of such a Hoare triple is that when executed in a state satisfying formula P, program C either does not terminate, or terminates in a state satisfying formula Q. Formulas P and Q may refer to program variables. Thus, for instance, the triple {x=0/\y=0}C{res>=0} says that program C, when started in a state where x=y=0, either fails to terminate or terminates in a state where res is nonnegative. Hoare logic comes with a proof system that lets you derive Hoare triples compositionally from Hoare triples about the subprograms appearing in C. Thus, for example, if you know {P}C{Q} and {Q}D{R}, then you can derive {P}C;D{R}, where C;D represents the sequential composition of C and D.
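
To make the sequencing rule concrete, here is a toy instance (the two-line program and the assignment-axiom premises are just for illustration):

    \[
    \frac{\{x = 0\}\ x := x + 1\ \{x = 1\} \qquad \{x = 1\}\ y := x\ \{y = 1\}}
         {\{x = 0\}\ x := x + 1;\ y := x\ \{y = 1\}}
    \]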

This works surprisingly well for simple imperative languages, but it does not scale easily to the kinds of features found in real programming languages. In particular, programs that manipulate pointers have always been problematic. Dealing with heap memory itself is not so bad: simply extend Hoare logic with a heap and allow the formulas P and Q in a Hoare triple to refer to values on the heap. The problem is that the heap must then be treated as one monolithic unit, while what one really wants is to reason locally, taking into account only the part of the heap that a piece of code reads and modifies.

Reynolds and O'Hearn, separately, came up with a rather clean way to express this kind of local reasoning by introducing a separation operator into the predicate language, which can be used to say that the heap is separable into two disjoint pieces that do not refer to each other through pointers. Thus, it is possible to reason locally about the heap that a piece of code uses by simply separating out the appropriate portion of the heap and re-introducing it in the postcondition of the code. In other words, if a piece of code C only uses memory cells c1 and c2, then we can use heap separation to split a heap h into h' * (c1 -> v1, c2 -> v2), have program C act on the local heap (c1 -> v1, c2 -> v2), perhaps producing a new local heap (c1 -> v1', c2 -> v2', c3 -> v3) (here, C has created a new memory cell c3 holding value v3), and the final heap is obtained by reattaching h' to (c1 -> v1', c2 -> v2', c3 -> v3). This works quite nicely, and I will refer you to the following papers for the history of those early days:
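
Stated as an inference rule, the local-reasoning principle just described is essentially what the separation logic literature calls the frame rule; roughly, and glossing over the exact side condition on modified variables:

    \[
    \frac{\{P\}\ C\ \{Q\}}{\{P * R\}\ C\ \{Q * R\}}
    \]

where * is the separation operator and R describes the part of the heap that C does not touch.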

Recently, separation logic has been applied to the problem of reasoning about heap-manipulating concurrent programs. New challenges emerge in this context, such as race conditions. It turns out that separation logic has something to say about managing resources (the heap can be viewed as a resource, so this is maybe not very surprising), and it has been used to reason about lock-free programs, that is, programs that attempt to avoid race conditions without relying heavily on locks, if at all. Unfortunately, there appear to be subtle issues with the rules of the logic when concurrency is added to the mix.
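
For a sense of what those rules look like, the concurrency rule usually discussed (stated here from memory, and eliding the side conditions on shared variables) uses separation of preconditions to justify parallel composition:

    \[
    \frac{\{P_1\}\ C_1\ \{Q_1\} \qquad \{P_2\}\ C_2\ \{Q_2\}}
         {\{P_1 * P_2\}\ C_1 \parallel C_2\ \{Q_1 * Q_2\}}
    \]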

A good overview of separation logic for concurrent programs is O'Hearn's survey paper:

O'Hearn credits Steve Brookes with the solution to the problems of separation logic for concurrent programs:

I hope the above reading material makes for a reasonable starting point.