# What is known about the role that binary representation of data plays in algorithmic complexity theory?

I define "analog representation of data" loosely as any method of representing data that is not a sequence of 1's and 0's. A more precise definition would obviously be nice to have. The basic idea is that an analog representation of data contains additional information beyond what is contained in a binary representation.
A very trivial example of an "analog" representation of the natural numbers is to label all the primes $p_1, p_2, p_3, \ldots$ and then represent numbers by their prime factorization instead of in binary form. It is obvious that with this representation of numbers, the problem of factoring large numbers has polynomial complexity.
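To make the example concrete, here is a small sketch of this factored representation (the names `to_factored`, `multiply`, and `factor` are hypothetical, chosen just for illustration). A number is stored as a dict mapping each prime to its exponent; multiplication is cheap (add exponents), and "factoring" amounts to reading the representation off.

```python
def to_factored(n):
    """Convert a binary-represented int to factored form (this step is slow)."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def multiply(a, b):
    """Multiplication in factored form: just add exponents."""
    out = dict(a)
    for p, e in b.items():
        out[p] = out.get(p, 0) + e
    return out

def factor(a):
    """In this representation, factoring is trivial: read off the dict."""
    return sorted(a.items())

n = multiply(to_factored(15), to_factored(77))  # 15 * 77 = 1155
print(factor(n))  # [(3, 1), (5, 1), (7, 1), (11, 1)]
```

Note that the hardness has not disappeared; it has moved into the conversion from binary to factored form, which is exactly the original factoring problem.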
Less trivially, according to Shor's algorithm, if we represent numbers using an analog physical system (in this case, qubits instead of bits), then it is possible to factor large numbers in polynomial time as well.
The natural conclusion to draw is that the choice of whether or not to represent data in binary likely plays an important role in determining whether a given problem is in P, and one would expect that any proof that P is not equal to NP must, at some point, use the assumption that data is represented in binary.
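One standard observation behind this point, shown as a small sketch: complexity is measured against the *size of the input as encoded*. Trial division takes roughly $\sqrt{n}$ steps, which looks modest relative to the value $n$, but the binary encoding of $n$ uses only $b = \lceil \log_2 n \rceil$ bits, so the same algorithm is $O(2^{b/2})$, exponential in the input size.

```python
import math

def smallest_factor(n):
    """Smallest nontrivial factor of n, or n itself if n is prime.
    Runs in O(sqrt(n)) division steps = O(2^(b/2)) in the bit-length b."""
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d
    return n

n = 91
print(smallest_factor(n))  # 7 (91 = 7 * 13)
print(n.bit_length())      # 7 bits: the input is tiny compared to the value
```

So whether factoring "looks" polynomial depends entirely on which encoding the input size is measured in.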
dkmc4175fl
"Binary or not" is not the central issue. Well designed data structures can speed up classical computations, but underneath those data structures it's all bits. What quantum computing does is to allow for massively parallel computation, thus problems that may be NP on a classical computer can be P on a quantum computer.
Here's the start of Wikipedia's discussion:

> Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data are encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation is analog and uses quantum bits, which can be in an infinite number of superpositions of states.