CMU ECE '16. Hobbyist interest in PL.
Hi, all. I'm a hardware engineer working in high-speed Ethernet. I took a lambda calculus course in college (with the wonderful Professor Statman of CMU) and have been a hobbyist fan of programming language design ever since. I have very little other background in PL, apart from the parsers I sometimes write at my day job.
It's okay. I've certainly had worse mishaps with automated emails. :-)
I believe that I see an issue:
Let f be a function which is computable in superpolynomial time, but verifiable in polynomial time. We define the inverse of this function, f⁻¹, to be the function such that f⁻¹(f(x)) = x. Verification that f⁻¹(y) = x, for each y in some set Y, involves checking that f(x) = y. Each such check takes polynomial time (since f is verifiable in polynomial time), and the process of verification is linear in the number of elements in Y, and so verification for f⁻¹ is computable in polynomial time.
But we need verification for f⁻¹ to happen in superpolynomial time, otherwise f⁻¹ cannot be outside of NP by definition. If f⁻¹ is in NP, then our claim that f⁻¹ is in P but not in NP does not hold. This would mean that we do not have evidence of a function that is in P but not in NP, which is central to the claim that P ≠ NP.
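The verification argument above can be sketched in code. This is a minimal illustration, not part of the original discussion: squaring stands in for f (so "verifying f(x) = y" is just checking x * x == y), and the function names are mine.

```python
def verify_f(x, y):
    """Polynomial-time check that f(x) = y.
    Stand-in: f(x) = x**2, so the check is a single multiplication."""
    return x * x == y

def verify_inverse_claim(claimed_inverse):
    """Verify a claimed table {y: x} for f^-1 by forward-checking each
    entry with the polynomial-time verifier for f. The loop is linear
    in the table size and each check is polynomial, so the whole
    verification runs in polynomial time."""
    return all(verify_f(x, y) for y, x in claimed_inverse.items())

print(verify_inverse_claim({9: 3, 16: 4}))  # True
print(verify_inverse_claim({9: 3, 16: 5}))  # False: 5*5 != 16
```

The point is that a verifier for f⁻¹ falls out of the verifier for f for free; nothing about inverting the function made verification harder.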
As a proof, let's take a general example of a function f that finds a solution in exponential time and verifies a result in polynomial time. Now let's introduce its inverse f⁻¹, and we have a general case in which solving and verifying have swapped algorithms.
I'm not convinced that such functions necessarily have to exist. Take, for example, the graph isomorphism problem, which is classically understood to be in NP. To use the structure of this discussion, we'd say that we have a function whose domain is the set of pairs of graphs, and whose codomain is {True, False}. You can go from a pair of graphs to a boolean value uniquely, but you cannot invert this function, since for both True and False there are multiple corresponding graph pairs.
Obviously, the existence of a single problem whose function cannot be inverted doesn't imply that no function is invertible, but I'm having a hard time coming up with any function that has a domain and range of different sizes and is also invertible. Perhaps you could propose one, or at least a (possibly informal, possibly nonconstructive) proof that such functions should exist?
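To make the graph isomorphism example concrete, here is a small sketch (my own illustration, with made-up graph encodings): a brute-force isomorphism check on tiny graphs, and two different input pairs that both map to True, showing the pair-to-boolean function is many-to-one and hence has no inverse.

```python
from itertools import permutations

def isomorphic(g1, g2):
    """Brute-force isomorphism check for tiny graphs, each given as
    (num_vertices, set_of_edges) with edges encoded as frozensets."""
    n1, e1 = g1
    n2, e2 = g2
    if n1 != n2 or len(e1) != len(e2):
        return False
    # Try every relabeling of g1's vertices and see if one yields g2.
    for p in permutations(range(n1)):
        if {frozenset(p[v] for v in e) for e in e1} == e2:
            return True
    return False

path = (3, {frozenset({0, 1}), frozenset({1, 2})})
path_relabeled = (3, {frozenset({0, 2}), frozenset({1, 2})})
triangle = (3, {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})

# Two distinct graph pairs that both land on True: nothing to invert to.
print(isomorphic(path, path_relabeled))  # True
print(isomorphic(triangle, triangle))    # True
print(isomorphic(path, triangle))        # False
```

The exhaustive permutation search is, of course, exactly the exponential "solving" direction; checking a *given* relabeling is the polynomial "verifying" direction.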
Firstly: I am definitely interested in your P vs. NP expository PDF.
The definition which I use, at least informally, is that a problem is in P if there exists an algorithm which solves that problem in a number of steps which is related to the "size" of the problem instance by a polynomial function. A problem is in NP if there exists an algorithm which takes a proposed solution to that problem and either verifies or disproves the correctness of that solution in polynomial time. (Where by polynomial time I mean the same as above.)
By this definition, the function of printing character bitmaps is in P by building a lookup table from input character to output bitmap. Running through this lookup table to find the appropriate bitmap to return takes at most on the order of n steps, where n is the number of characters in the lookup table. If the number of characters is fixed at the definition of the function, then the runtime of this algorithm is at most on the order of a constant. Since a constant is a polynomial, the problem is in P.
The function of converting a bitmap back into a character is similar, albeit with a larger lookup table. The runtimes of these functions will therefore be fairly different; I would use the character-to-bitmap function without reservation, but would probably try to find a more efficient implementation of the bitmap-to-character one. Despite their differences in runtime, however, they have the same asymptotic complexity, and both reside in the complexity class P.
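A minimal sketch of the two lookup-table functions, with made-up 2x2 "bitmaps" standing in for real glyphs (the table contents and names here are mine, purely for illustration):

```python
# Fixed table: character -> bitmap. Because the table size is fixed at
# definition time, a lookup is bounded by a constant number of steps.
CHAR_TO_BITMAP = {
    "A": ((1, 0), (1, 1)),
    "B": ((1, 1), (1, 0)),
}

# The inverse direction is just the inverted table (larger in practice,
# but built the same way).
BITMAP_TO_CHAR = {bmp: ch for ch, bmp in CHAR_TO_BITMAP.items()}

def render(ch):
    """Character -> bitmap: one table lookup, trivially polynomial."""
    return CHAR_TO_BITMAP[ch]

def recognize(bmp):
    """Bitmap -> character: same idea, using the inverse table."""
    return BITMAP_TO_CHAR[bmp]

print(recognize(render("A")))  # A
```

Both directions are constant-time table lookups; the asymmetry between them is a constant factor (table size), not a difference in complexity class.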
Perhaps you could elaborate on how you are defining P and NP?
I believe that what you are describing is the distinction in algorithm analysis between work and span. In brief, work is a measure of the total number of operations performed during a computation, while span is the length of the longest dependency chain between operations. I last talked about this in a freshman algorithms class, so I can't point you to any literature (at least, not quickly), but you might be able to find references under those names.
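As a quick illustration of the two measures (my own example, not from the thread): summing n numbers with a balanced reduction tree performs n - 1 additions total (work), but any single addition only waits on a chain of about log2(n) earlier additions (span).

```python
import math

def work_and_span_of_tree_sum(n):
    """Work and span for summing n numbers via a balanced reduction tree:
    work = total additions = n - 1,
    span = longest chain of dependent additions = ceil(log2 n)."""
    work = n - 1
    span = math.ceil(math.log2(n))
    return work, span

print(work_and_span_of_tree_sum(8))     # (7, 3)
print(work_and_span_of_tree_sum(1024))  # (1023, 10)
```

With enough processors, the running time is governed by the span rather than the work, which is why the two are tracked separately.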
The corollary about P vs. NP doesn't seem very interesting to me, since all it claims is that the amount of time to solve certain problems grows slowly (or not at all) when you scale your computational resources to match the size of the problem. Perhaps I'm missing something of significance?
See, for example, the definitions given in the overview section of https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms