# Parallelism, products, and automata

October 3, 2013

I have an unreasonable love of alliterations, so I wish I knew a common word for automata that started with P. I even considered calling this post "Parallelism, Products, Producers", but feared introducing non-standard nomenclature and confusion into Professor Prakash Panangaden's class. Hence, the current title; not too bad? If we count "and automata" as an alliteration then I can claim to have introduced an example of parallelism as used in rhetoric right in the title. Unfortunately, the post's on parallelism in processing; sorry, having too much fun.

Proving that the left half of a regular language is regular was the hardest question on the first assignment. It was also a challenge for me as a TA, because I couldn't find many avenues for hints and advice that didn't reveal the answer directly. From grading, however, I was impressed by the number of students that solved it, but a bit disappointed by those that Googled "half of a regular language" and clicked on the first result, like Ben Reichardt's notes. Since there are solutions already online, I decided that I could give answers, too. Although Prakash provided a full solution set, I thought I would sketch a pedantic treatment of question 5.

One of the best tools from the theory of computing is showing that two distant-seeming languages, or even models of computation, actually have the same or similar complexity. The regular languages serve as a microcosm for learning and honing these tools. When you first see the boringly serial finite state machines presented, a natural question is: what if I run two DFAs in parallel, is that still a DFA? Well, pedantically no, it doesn't match the definitions, but in reality — yes, we just have to be more precise.

DFAs are introduced because they recognize (or produce; hence my titular temptation) a regular language. Hence, we should try to relate our machines to languages: if we have two machines $M_1$ and $M_2$ with languages $L_1$ and $L_2$, then is the language $L_1 \cap L_2$ regular? The tempting answer is "yes, run the two machines in parallel". Since both are finite, running them in parallel seems finite, too. However, we have to build a standard serial DFA to prove regularity, and we can do this with the product construction. Let $M = M_1 \times M_2$ have states $Q = Q_1 \times Q_2$, with start state $s = (s_1, s_2)$ (where $s_i$ is the start state of machine $M_i$), and transition function $\delta((q_1, q_2), a) = (\delta_1(q_1, a), \delta_2(q_2, a))$. Clearly, the machine just treats the two parts of the tuple as independent sub-machines running in parallel, and it isn't hard to show (by induction on word length) that $\delta^*((q_1, q_2), w) = (\delta_1^*(q_1, w), \delta_2^*(q_2, w))$. This is the product construction (I encourage the reader to check that we could do the same thing with two NFAs to make a product NFA), and it leaves us with only the final states to specify. This is where we usually implement our way of combining the two languages. In the case of intersection, the natural logical operation is 'and', so we let $F = \{ (q_1, q_2) \;|\; q_1 \in F_1 \text{ and } q_2 \in F_2 \}$. If we wanted to implement $L_1 \cup L_2$ then we would just use 'or': $F = \{ (q_1, q_2) \;|\; q_1 \in F_1 \text{ or } q_2 \in F_2 \}$. In a very natural sense, we have shown that the product construction runs two machines in parallel.
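To make the construction concrete, here is a minimal Python sketch. The dictionary encoding of the transition function and all the names are my own choices for illustration, not notation from the course:

```python
from itertools import product

def run_dfa(delta, start, finals, word):
    """Simulate a DFA whose transition function is a dict (state, letter) -> state."""
    q = start
    for a in word:
        q = delta[(q, a)]
    return q in finals

def product_dfa(d1, s1, f1, d2, s2, f2, sigma, states1, states2, mode="and"):
    """Product construction: tuple states run both machines in parallel.

    mode="and" picks final states for intersection, mode="or" for union.
    """
    delta = {((p, q), a): (d1[(p, a)], d2[(q, a)])
             for p, q in product(states1, states2) for a in sigma}
    if mode == "and":
        finals = {(p, q) for p, q in product(states1, states2)
                  if p in f1 and q in f2}
    else:
        finals = {(p, q) for p, q in product(states1, states2)
                  if p in f1 or q in f2}
    return delta, (s1, s2), finals
```

Running it on two toy machines over $\{a, b\}$ (say, "an even number of a's" and "ends in b") checks that the 'and' product accepts exactly the words in both languages, while the 'or' product accepts words in either.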

For a more involved example, consider the language $L' = \{ w \;|\; w w^R \in L \}$, where $w^R$ is $w$ written in reverse. If $L$ is regular then is $L'$ regular? To see that it is, we have to turn a machine $M$ for $L$ into one for $L'$. The intuitive way to do this is, as we read a word for $L'$, to pretend that we are instead reading a word for $L$ from both ends — run the DFA $M$ and the reversal NFA $M^R$ (obtained by flipping every transition of $M$ and starting from $M$'s final states) in parallel, and accept only if they reach the same state. We know this can be done using our product construction above, with final states defined as $F' = \{ (q, q) \;|\; q \in Q \}$.
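Here is a self-contained Python sketch of that parallel run (all names and the dictionary encoding of $\delta$ are my own): the forward component is a single DFA state, while the reversal NFA is simulated directly as a set of states.

```python
def palindrome_preimage_accepts(delta, start, finals, states, w):
    """Decide whether w w^R is in L(M), for a DFA M with transition
    dict delta: (state, letter) -> state.

    Run M forward on w; in parallel, run the reversal NFA M^R as a set S
    of states (start from M's final states, follow transitions backwards).
    Accept iff the two components meet in the same state.
    """
    p = start
    S = set(finals)
    for a in w:
        p = delta[(p, a)]                              # forward component
        S = {q for q in states if delta[(q, a)] in S}  # backward component
    return p in S
```

For instance, with a DFA recognizing exactly the word "abba", this accepts "ab" (since "ab" followed by its reverse "ba" gives "abba") and nothing else.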

Alternatively, we might have a language where both words are in the forward direction. Consider $L' = \{ w \;|\; ww \in L \}$ where $L$ is recognized by a machine $M = (Q, \Sigma, \delta, s, F)$. Now, we have to guess the middle point and run the second machine from that point. So, for every $q \in Q$, define $M^1_q$ as $M$ except with $F^1_q = \{ q \}$, and $M^2_q$ as $M$ except with $s^2_q = q$. Now, run these two machines in parallel, accepting only if both accept. In other words, for every $q$ consider the language $L_q = L(M^1_q) \cap L(M^2_q)$. Clearly, if we take the union of these over all $q \in Q$ then we get $L' = \bigcup_{q \in Q} L_q$.
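In code, membership in each $L_q$ is just two runs of the same transition function from different start states, and the union over $q$ is a loop. A minimal sketch with my own names, again encoding the DFA as a transition dict:

```python
def in_L_q(delta, start, finals, q, w):
    """w is in L_q iff M with final set {q} accepts w (first run)
    and M with start state q accepts w (second run)."""
    p = start
    for a in w:
        p = delta[(p, a)]
    r = q
    for a in w:
        r = delta[(r, a)]
    return p == q and r in finals

def double_preimage_accepts(delta, start, finals, states, w):
    """w w is in L iff w is in some L_q: take the union over middle states q."""
    return any(in_L_q(delta, start, finals, q, w) for q in states)
```

With a DFA recognizing exactly "abab", only w = "ab" satisfies $ww \in L$, and only the middle state $q$ reached after reading "ab" makes both runs accept.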

If we want to explicitly build a machine using these ideas for $L' = \{ w \;|\; ww \in L \}$ then we could just use our product construction, or notice a cute trick that works because of the union. We could define an NFA $M'$ with states $Q' = Q \times Q \times Q$, start states $S' = \{ (s, q, q) \;|\; q \in Q \}$, transition function $\delta'((p, q, r), a) = (\delta(p, a), q, \delta(r, a))$, and final states $F' = \{ (q, q, f) \;|\; q \in Q, f \in F \}$. Notice that the only non-determinism we have is in the start state, where we guess what state to put in the second component of the tuple. Once the second component is set to $q$, it is kept fixed by $\delta'$, and we are basically running the machine for $L_q$.
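This triple-state NFA can be simulated directly by tracking the set of live $(p, q, r)$ configurations; note that the only nondeterministic step is the choice of initial configuration. A sketch under the same dictionary encoding as before (names are mine):

```python
def double_preimage_nfa_accepts(delta, start, finals, states, w):
    """Simulate the NFA on Q x Q x Q: the first component runs M from the
    start, the second holds the guessed middle state, the third runs M
    from the guess. Accept iff some configuration has p == q and r final.
    """
    configs = {(start, q, q) for q in states}  # guess the middle state q
    for a in w:
        configs = {(delta[(p, a)], q, delta[(r, a)]) for (p, q, r) in configs}
    return any(p == q and r in finals for (p, q, r) in configs)
```

It accepts exactly the same words as the union-of-intersections description, since each initial configuration $(s, q, q)$ plays out the machine for $L_q$.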

How does all of this relate to the left half, $\mathrm{half}(L) = \{ u \;|\; \exists v \text{ with } |u| = |v| \text{ and } uv \in L \}$? Well, it is obvious that we can modify either construction: for $\mathrm{palindrome}^{-1}(L) = \{ w \;|\; ww^R \in L \}$, change the labels of all the non-epsilon transitions in $M^R$ to work for every $\sigma \in \Sigma$; for $\mathrm{double}^{-1}(L) = \{ w \;|\; ww \in L \}$, change the transitions of the third component to work for any letter, i.e. by making $\delta'((p, q, r), a) = \{ (\delta(p, a), q, \delta(r, b)) \;|\; b \in \Sigma \}$.
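Concretely, the only change to the triple-state simulation is that the third component now reads an arbitrary letter at each step, nondeterministically guessing the second half $v$. Again a self-contained sketch with my own names:

```python
def half_accepts(delta, start, finals, states, sigma, u):
    """half(L) = {u : there is v with |v| = |u| and uv in L}.

    Same triple-state NFA as for {w : ww in L}, except the third
    component may follow any letter b from the alphabet sigma.
    """
    configs = {(start, q, q) for q in states}  # guess the middle state q
    for a in u:
        configs = {(delta[(p, a)], q, delta[(r, b)])
                   for (p, q, r) in configs for b in sigma}
    return any(p == q and r in finals for (p, q, r) in configs)
```

For a DFA recognizing exactly "abba", the left half is just {"ab"}: reading u = "ab" reaches some middle state $q$, and the third component can guess a two-letter continuation from $q$ into the final states.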
