We can't build something out of nothing. To bootstrap a language, we need a medium consisting of a low-level language definition. If this low-level language is Turing complete, and we have a valid, complete term rewriting system, we can bootstrap anything from that. But not without the startup medium.
If we consider each computation as a simple function step from one value to another, we can approach the P vs NP problem simply by observing different cases of value extraction.
We define functions by their types and their specific value pairs:
f : X → Y
f = {(x₁, y₁), (x₂, y₂), …}
We can also combine functions, even recursively, as in the factorial example:
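A minimal sketch of such a recursive combination (an illustrative Python version; the function name is mine):

```python
def factorial(n: int) -> int:
    """Combine multiplication with a recursive call to factorial itself."""
    if n == 0:
        return 1                      # base case: 0! = 1
    return n * factorial(n - 1)       # recursive combination step

print(factorial(5))  # -> 120
```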
Readers familiar with functional programming are aware that first- and higher-order functions are algorithmically complete for calculating results.
If we take each function calculation step as a discrete unit of computation, we can imagine the following situations:
- When searching 1:N values, we get N computations.
- When searching M:N values, we get N computations.
- When searching N:1 values, we get just one computation.
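The difference between the N-computation and one-computation cases can be sketched by storing a relation two ways (an illustrative Python sketch; the names and sample rows are mine):

```python
# Hypothetical relation stored two ways: as raw (input, output) pairs
# that must be scanned row by row, and as an index for direct lookup.
pairs = [(0, 0), (1, 1), (2, 4), (3, 9)]   # rows of f(x) = x * x
index = dict(pairs)                        # N:1 direction: one lookup step

def scan(pairs, wanted_output):
    """1:N / M:N style search: every row visited is one computation."""
    return [x for (x, y) in pairs if y == wanted_output]

print(index[2])        # N:1 -- a single computation -> 4
print(scan(pairs, 4))  # 1:N -- N computations       -> [2]
```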
According to the previous post, we can go backwards if we want to check a result instead of solving for it. From the three enumerated cases, it is obvious that all three combinations are possible: solving can be more complex than checking; solving can be equally complex as checking; solving can be less complex than checking.
Based on this observation, I propose the following Venn diagram for the P vs NP situation:
Am I missing something?
Assumption: two kinds of dualities are involved here, and they are diagonally related. When we switch from f to f⁻¹, verifying f becomes solving f⁻¹, and solving f becomes verifying f⁻¹, because it is basically the same algorithm run in the opposite direction.
A similarity between solving and verification can be seen in the simple function f(x) = x². We get a table:

x or f⁻¹(y) | f(x) or y
----------- | ---------
0           | 0
1           | 1
2           | 4
3           | 9
…           | …
For f, we could say:
Solving happens from left to right: it is the printing of all right-hand values corresponding to a known left-hand value. Relatively, solving takes as many steps as there are rows in the table.
Verifying happens from right to left: it is the finding of any left-hand value paired with a known right-hand value. Relatively, in the worst case, verification takes as many steps as there are rows in the table.
For f⁻¹, we could say the opposite:
- Solving happens from right to left, and it is an analogous (if not the same) process to verifying f.
- Verifying happens from left to right, and it is an analogous (if not the same) process to solving f.
The only difference between solving and verifying is that in solving we print out all the answers, while in verifying we check at least one of the answers (possibly all of them) against the expected ones.
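The two directions over the table can be sketched as (illustrative Python; the function names are mine):

```python
table = [(x, x * x) for x in range(5)]  # rows of f(x) = x^2

def solve(table, x):
    """Left to right: collect all right-hand values for a known x."""
    return [y for (left, y) in table if left == x]

def verify(table, y):
    """Right to left: find any left-hand value paired with a known y."""
    for (x, right) in table:
        if right == y:
            return x    # one match is enough for verification
    return None

print(solve(table, 3))   # -> [9]
print(verify(table, 9))  # -> 3
```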
Formally, the conclusions from above could be written as follows:
If the above assumptions are correct, and if we say:
we get the following conclusions:
(from 2 and 5) f⁻¹ ∉ NP
(from 3 and 6) f⁻¹ ∈ P
with no contradiction involved.
However, maybe I made a mistake by considering solving and verifying as dual sides of the same process, sides that switch roles when we invert a function.
Sorry for the mess in the mail and on the forum; something went wrong with the site when I was trying to post this…
You can go from a pair of graphs to a boolean value uniquely, but you cannot invert this function, since both True and False have multiple corresponding graph pairs.
Of course we can go back! We just have multiple value forms that correspond to either True or False. And the more value forms there are, the more time complexity there is in extracting them.
In my opinion, all functions should be reversible; it is their nature. I think of a result, say r, simply as a counterpart of the parameters a and b. From a, b, and r, we can construct a table with columns a, b, and r. When we fill the body of the table with specific corresponding values, we get tuples as rows (possibly with repeated values across rows, when a single value in one row corresponds to multiple values in other rows). Now we can search the tuples by fixing a, b, r, or any combination of them, and reading the values we need from the other columns in the matching rows. Of course, certain queries can return multiple results, and that is partly the reason for a greater query run time.
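Such a table query can be sketched as follows (an illustrative Python sketch; the relation r = a + b and the `query` helper are my assumptions):

```python
# Hypothetical table with columns a, b, r, filled with rows where r = a + b.
rows = [(a, b, a + b) for a in range(3) for b in range(3)]

def query(rows, a=None, b=None, r=None):
    """Fix any combination of columns; return every matching row."""
    return [row for row in rows
            if (a is None or row[0] == a)
            and (b is None or row[1] == b)
            and (r is None or row[2] == r)]

print(query(rows, a=1, b=2))  # unique answer: [(1, 2, 3)]
print(query(rows, r=2))       # multiple rows: [(0, 2, 2), (1, 1, 2), (2, 0, 2)]
```

Querying by r (the inverse direction) can return several rows, which is exactly the multiple-results case mentioned above.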
Yes, I think we define a problem as being in P or NP in the same way: when it solves or verifies a solution in polynomial time, respectively.
The example of printing a character / OCR falls within polynomial time (linear, more precisely), indeed, but printing a character is undoubtedly faster than OCR. OCR checks each lookup-table element against the input bitmap, while printing checks the lookup table against only a key and then extracts an output bitmap. It is an example of the same function being faster when run in the calculating direction than in the verifying direction (despite the superficial intuition that solving has to be more difficult than verifying).
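The asymmetry can be sketched like this (an illustrative Python sketch; the tiny font table and the bitmap strings are my assumptions):

```python
# Hypothetical font table: character -> tiny "bitmap" string.
font = {'A': '01110', 'B': '11110', 'C': '01111'}

def print_char(ch):
    """Printing: one keyed lookup, then extract the output bitmap."""
    return font[ch]

def ocr(bitmap):
    """OCR: check every lookup-table element against the input bitmap."""
    for ch, bm in font.items():
        if bm == bitmap:
            return ch
    return None

print(print_char('B'))  # -> 11110
print(ocr('01111'))     # -> C
```

Both directions are linear at worst, but `print_char` does a single keyed step while `ocr` may scan the whole table.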
The noted example of faster solving / slower verifying suggests that there exists a case of exponential verifying / polynomial solving. To prove this existence, we have to find a function whose domain is exponentially larger, or computationally more intensive, relative to its codomain.
As a proof, let's take a general example of a function A that finds a solution in exponential time and verifies a result in polynomial time. Now let's introduce its inverse function B, and we have a general case in which solving and verifying have swapped calculating algorithms. Hence, in such a function we have a polynomial solving / exponential verifying case, and I think the question now is only how far our imagination goes in finding a real-world example, as in theory things work like a charm with abstract inverse functions, from which we derive that P ⊆ NP has to be false.
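The claimed role swap between A and its inverse B can be sketched as (an illustrative Python sketch; the squaring relation and all names are my assumptions, standing in for the abstract functions):

```python
def verify_A(candidate, target):
    """Cheap check of one candidate result (here: squaring)."""
    return candidate * candidate == target

def solve_A(target, domain):
    """Search-style solving: try candidates until the check accepts."""
    for candidate in domain:
        if verify_A(candidate, target):
            return candidate
    return None

# Inverse direction B: "solving" is now the one-step computation,
# while "verifying" a B-result against a target would replay the search.
def solve_B(x):
    return x * x

print(solve_A(9, range(10)))  # search over the domain -> 3
print(solve_B(3))             # single direct step     -> 9
```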