IvanV

  • Started 1 topic
  • Posted 10 times

Recent activity

Posted in An Algoloid that uses macros to define itself

We can't build something out of nothing. To bootstrap a language, we have to have a medium that consists of a low-level language definition. If this low-level language is Turing complete, and we have a valid, complete term rewriting system, we can bootstrap anything from that. But not without the startup medium.
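As a toy illustration of the idea (my own sketch, not anything from the thread; the rule format and example are made up), Python below plays the role of the low-level startup medium hosting a tiny term rewriting system:

```python
# A minimal string-rewriting interpreter. Python is the Turing-complete
# "startup medium"; the rules are the language bootstrapped on top of it.
def rewrite(term, rules, max_steps=1000):
    """Apply the first matching rule repeatedly until no rule applies."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)
                break
        else:
            return term  # no rule applies: normal form reached
    return term

# Unary addition defined purely by rewriting: "11+111" normalizes to "11111".
print(rewrite("11+111", [("1+1", "11")]))
```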

Posted in P = NP consequences

If we consider each computation as a simple function step from one value to another, we can see a solution to the P vs NP problem simply by observing different cases of value extraction.

We define functions by their types and their specific value pairs:

(type of function $f$)
$f: int \rightarrow int$

(value pairs of function $f$)
$f: 1 \rightarrow 2$
$f: 2 \rightarrow 3$
$f: 3 \rightarrow 4$
$\ldots$
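Concretely, the value-pair view is just a lookup table; a minimal sketch in Python (my own illustration, continuing the successor pattern above):

```python
# f as an explicit table of value pairs (the successor pattern above).
f = {1: 2, 2: 3, 3: 4}

print(f[2])  # extract the right-hand value for a known left-hand value -> 3
```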

We can also combine functions, even recursively, as in the factorial example:

$fact: int \rightarrow int$
$fact: x \rightarrow \begin{cases} x * fact(x - 1) & \text{if}\ x > 0 \\ 1 & \text{otherwise} \end{cases}$
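The piecewise definition transcribes directly; a sketch:

```python
def fact(x: int) -> int:
    """fact: int -> int, written like the piecewise definition above."""
    return x * fact(x - 1) if x > 0 else 1

print(fact(5))  # -> 120
```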

Readers familiar with functional programming are aware of the algorithmic completeness of first- and higher-order functions for calculating results.

If we take each function calculation step as a discrete unit of computation, we can imagine the following situations:

  • When searching 1:N values, we get N computations
  • When searching M:N values, we get N computations
  • When searching N:1 values, we get just one computation
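To make the step counting concrete, here is how I read these cases when the function is stored as a plain table of pairs (a sketch under that assumption; the table contents are made up):

```python
# A relation stored as rows of (left, right) pairs.
table = [(1, 'a'), (1, 'b'), (1, 'c'),   # left value 1 is paired 1:N
         (2, 'x'), (3, 'x'), (4, 'x')]   # right value 'x' is paired N:1

def rights_of(left):
    # Extracting every right value paired with `left`:
    # roughly one computation per result extracted.
    return [r for l, r in table if l == left]

print(rights_of(1))  # ['a', 'b', 'c'] - N results, N computations
print(rights_of(2))  # ['x'] - a single result, a single extraction
```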

As noted in the previous post, we can go backwards if we want to check a result instead of solving for it. From the three enumerated cases, it is obvious that all three combinations are possible: solving can be more complex than checking; solving can be equally as complex as checking; solving can be less complex than checking.

Based on this observation, I propose the following Venn diagram for the P vs NP situation:

[Venn diagram: P and NP]

Am I missing something?

Posted in P = NP consequences

Assumption: there are two kinds of dualities involved here, and they are diagonally related. When we switch from $f$ to $f^{-1}$, verifying for $f$ becomes solving for $f^{-1}$, and solving for $f$ becomes verifying for $f^{-1}$, because it is basically the same algorithm, run in the opposite direction.

A similarity between solving and verification can be seen in the simple function $f(x) = x^2$. We get a table:

$x$ or $f^{-1}(y)$    $f(x)$ or $y$
-2                    4
-1                    1
0                     0
1                     1
2                     4

For $f$, we could say:

  • Solving happens from left to right: it is printing all the right-hand values according to the known left-hand values. Relatively, solving takes as many steps as there are rows in the table.

  • Verifying happens from right to left: it is finding any left-hand value paired with a known right-hand value. Relatively, in the worst case, verification takes as many steps as there are rows in the table.

For $f^{-1}$, we could say the opposite:

  • Solving happens from right to left, and it is an analogous (if not the same) process to verifying $f$

  • Verifying happens from left to right, and it is an analogous (if not the same) process to solving $f$

The only difference between solving and verifying is that in solving we print out all the answers, while in verifying we check at least one of the answers (possibly all) against the expected ones.
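A sketch of both directions on the $f(x) = x^2$ table above (my own illustration):

```python
# The f(x) = x^2 table from above, as rows of (x, y) pairs.
rows = [(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)]

def solve(rows):
    """Solving for f: print every right-hand value - one step per row."""
    for x, y in rows:
        print(f"f({x}) = {y}")

def verify(rows, y):
    """Verifying for f: find any left-hand value paired with y;
    in the worst case this also scans every row."""
    for x, fy in rows:
        if fy == y:
            return x
    return None

solve(rows)
print(verify(rows, 4))  # -2 (the first x found with f(x) = 4)
```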

Formally, the conclusions from above could be written as:


(1) $(f \in P) \leftrightarrow (f^{-1} \in NP)$
(2) $(f \notin P) \leftrightarrow (f^{-1} \notin NP)$
(3) $(f \in NP) \leftrightarrow (f^{-1} \in P)$
(4) $(f \notin NP) \leftrightarrow (f^{-1} \notin P)$


If the above assumptions are correct, and if we say:
(5) $f \notin P$
(6) $f \in NP$

we get the following conclusions:
(from 2 and 5) $f^{-1} \notin NP$
(from 3 and 6) $f^{-1} \in P$

with no contradiction involved.


However, maybe I made a mistake by considering solving and verifying as dual sides of the same process, sides that switch roles when we invert a function.

Posted in P = NP consequences

Sorry for the mess in the mail and on the forum, something went wrong with the site when I was trying to post this…

You can go from a pair of graphs to a boolean value uniquely, but cannot invert this function, since for both True and False there are multiple corresponding graph pairs.

Of course we can go back! We just have multiple value forms that correspond to either True or False. And the more value forms there are, the more time complexity there is to extract them.

In my opinion, all functions should be reversible; it is their nature. I think of a result, say $r$, simply as a counterpart of the parameters $a$ or $b$. From $a$, $b$, and $r$, we can simply construct a table with columns $a$, $b$, and $r$. When we fill the body of the table with specific corresponding values, we get tuples as rows (with possibly repeated values across rows, when a single value in one row corresponds to multiple values in other rows). Now, we can search the tuples by noting either $a$, $b$, $r$, or any combination of them, and getting the values we need from the other columns in the same row(s). Of course, multiple results can be expected with certain queries, and that is partly the reason for a greater query run time.
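Here is a sketch of that table view with hypothetical columns, taking $r = a + b$ as a stand-in function (my own example):

```python
# A function viewed as a table with columns a, b, r (here r = a + b).
rows = [(a, b, a + b) for a in range(3) for b in range(3)]

def query(rows, a=None, b=None, r=None):
    """Return every row whose noted columns match; unnoted columns stay free.
    Multiple matching rows are exactly where the extra run time comes from."""
    return [(x, y, z) for x, y, z in rows
            if (a is None or x == a)
            and (b is None or y == b)
            and (r is None or z == r)]

print(query(rows, a=1, b=2))  # forward: [(1, 2, 3)]
print(query(rows, r=2))       # backward: [(0, 2, 2), (1, 1, 2), (2, 0, 2)]
```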

Posted in P = NP consequences

Yes, I think we define a problem being in P or NP in the same way: a problem is in P or NP when it can be solved or verified, respectively, in polynomial time.

The example of printing a character / OCR indeed falls into polynomial time (linear, more precisely), but printing a character is undoubtedly faster than OCR. OCR checks each lookup table element against the input bitmap, while printing checks the lookup table against only a key and then extracts an output bitmap. It is an example of the same function being faster when run in the calculating direction than in the verifying direction (despite the superficial intuition that solving has to be more difficult than verifying).
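A sketch of that asymmetry with toy bitmaps (the table contents are hypothetical):

```python
# A toy lookup table from characters to flattened "bitmaps".
table = {'A': '010101111101101', 'B': '110101110101110'}

def print_char(c):
    """Printing: check the table against only a key, then extract the bitmap."""
    return table[c]

def ocr(bitmap):
    """OCR: check each lookup table element against the input bitmap."""
    for c, bmp in table.items():
        if bmp == bitmap:
            return c
    return None

print(print_char('A'))         # one keyed lookup
print(ocr('110101110101110'))  # 'B' - scans the whole table in the worst case
```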

The noted example of faster solving / slower verifying gives an indication that there exists a case of exponential verifying / polynomial solving. To prove this existence, we have to find a function whose domain is exponentially larger, or computationally more intensive, relative to its codomain.

As a proof, let's take a general example of function AA that finds a solution in exponential and verifies a result in polynomial time. Now let's introduce its inverted function BB, and we have a general case in which solving and verifying swapped the calculating algorithms. Hence, in such a function we have a polynomial solving / exponential verifying case, and I think that now the question is only how far our imagination goes to find a real world example, as in theory things work like a charm with abstract inverse functions, deriving that PNPP \subseteq NP has to be false.