# Determining when two binary strings represent the same necklace, or when one binary string is periodic
Define an equivalence relation on binary strings: two strings are equivalent if one can be obtained from the other by a cyclic permutation (rotation) of its characters. Combinatorialists call the equivalence classes of this relation "necklaces".
Given two binary strings of length n, the brute-force method for determining if they are equivalent is just iteratively cycling one of them n times and comparing to see if the other string is ever obtained. Is there a faster algorithm? Perhaps there is a list of invariants that are fast to compute and uniquely determine the necklace? Or perhaps through a bijection with some other set, like the irreducible polynomials over $\mathbb{F}_2$ with degree dividing n, it is possible to more efficiently solve the problem and transfer back?
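The brute-force test just described can be sketched in a few lines of Python (the function name is my own):

```python
def necklace_equivalent(s, t):
    """Brute force: cycle s through all n rotations, comparing each to t."""
    if len(s) != len(t):
        return False
    n = len(s)
    # s[k:] + s[:k] is s rotated left by k positions
    return any(s[k:] + s[:k] == t for k in range(n))
```

Each of the n rotation comparisons costs O(n) in the worst case, so this is O(n²) worst case overall, which is the baseline the question asks to beat.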
A related question: is there an algorithm better than iterative cycling and comparison to get a computer to recognize if a string is periodic? That is, is there a fast algorithm to compute the stabilizer subgroup of a string under the action of the cyclic group of order n?
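For the second question, the stabilizer subgroup can likewise be computed by iterative cycling; this sketch (naming is mine) returns the set of rotation amounts that fix the string, which is a subgroup of $\mathbb{Z}/n\mathbb{Z}$:

```python
def rotation_stabilizer(s):
    """Rotation amounts k (mod n) with s[k:] + s[:k] == s.

    The result is a subgroup of Z/nZ; the string is periodic
    exactly when the stabilizer contains a nonzero element.
    """
    n = len(s)
    return {k for k in range(n) if s[k:] + s[:k] == s}
```

For example, `rotation_stabilizer("0101")` is `{0, 2}` (period 2), while an aperiodic string like `"0010"` has trivial stabilizer `{0}`.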

Step 1
If you really wanted to, you could get worst-case $O(n \log n)$ using a fast Fourier transform, at least when n is a power of 2.
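One way to realize the FFT idea is cross-correlation: double one string so every rotation of it appears as a window, then count character matches at every alignment at once. The sketch below (my own construction, using NumPy) counts matches as (correlation of the 1s) + (correlation of the 0s) and checks whether any alignment matches in all n positions; NumPy's FFT does not actually require n to be a power of 2.

```python
import numpy as np

def rotation_equivalent_fft(s, t):
    """Check whether t is a cyclic rotation of s in O(n log n) via FFT."""
    n = len(s)
    if len(t) != n:
        return False
    a = np.array([int(c) for c in s * 2])  # s doubled: each rotation is a window
    b = np.array([int(c) for c in t])
    m = 2 * n
    # Correlation of a with b == convolution of a with reversed b.
    ones = np.fft.irfft(np.fft.rfft(a, m) * np.fft.rfft(b[::-1], m), m)
    zeros = np.fft.irfft(np.fft.rfft(1 - a, m) * np.fft.rfft((1 - b)[::-1], m), m)
    # Indices n-1 .. 2n-2 of the convolution are the n valid alignments.
    matches = np.rint(ones + zeros)[n - 1 : 2 * n - 1]
    return bool(np.any(matches == n))
```

Each FFT costs $O(n \log n)$, and everything else is linear, so the whole test is $O(n \log n)$ in the worst case.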
I know nothing about such things, but I doubt that this could be worth the trouble, because the brute-force approach is going to be $O(n)$ on average: you can compare two random strings of arbitrary length in constant average time.
Step 2
The probability that you get a "no" after looking at the first character is 1/2. The probability that you have to look at the first and then the second characters to get a "no" is 1/4. And so on; the expected number of character comparisons needed is no larger than
$$\sum_{k=1}^{\infty} k\,2^{-k} = 2 < \infty.$$
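A quick simulation (my own sketch) bears this out: for long random strings, the average number of character comparisons before a verdict sits close to the value 2 of the series above.

```python
import random

def comparisons_until_verdict(s, t):
    """Count character comparisons a left-to-right scan makes
    before it can report "equal" or "not equal"."""
    for k, (a, b) in enumerate(zip(s, t), start=1):
        if a != b:
            return k          # first mismatch ends the comparison
    return len(s)             # equal strings: all n characters compared

random.seed(0)
n, trials = 30, 20000
total = 0
for _ in range(trials):
    s = [random.randint(0, 1) for _ in range(n)]
    t = [random.randint(0, 1) for _ in range(n)]
    total += comparisons_until_verdict(s, t)
avg = total / trials
print(avg)  # close to 2
```

Of course, the strings arising in the necklace test are rotations of each other rather than independent random strings, so this average-case argument is heuristic rather than a worst-case guarantee.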