When considering two-sample data that involves a difference of proportions, both a confidence interval and a hypothesis test can be done.

The standard error used when building a confidence interval for a difference of proportions is $\sqrt{\frac{{p}_{1}(1-{p}_{1})}{{n}_{1}}+\frac{{p}_{2}(1-{p}_{2})}{{n}_{2}}}$

However, the standard error used for hypothesis tests is $\sqrt{\frac{p(1-p)}{{n}_{1}}+\frac{p(1-p)}{{n}_{2}}}$, where $p=\frac{{x}_{1}+{x}_{2}}{{n}_{1}+{n}_{2}}$, ${x}_{1}={p}_{1}{n}_{1}$, and ${x}_{2}={p}_{2}{n}_{2}$
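To make the two formulas concrete, here is a minimal sketch (with made-up counts, 40 successes out of 100 versus 50 out of 120) computing both standard errors side by side. The function names and the sample numbers are my own for illustration:

```python
import math

def se_unpooled(p1, n1, p2, n2):
    """Standard error for a confidence interval: each sample
    proportion estimates its own population proportion."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

def se_pooled(x1, n1, x2, n2):
    """Standard error for a hypothesis test: both samples are
    combined into a single pooled proportion p."""
    p = (x1 + x2) / (n1 + n2)
    return math.sqrt(p * (1 - p) / n1 + p * (1 - p) / n2)

# Hypothetical data: 40/100 in sample 1, 50/120 in sample 2.
x1, n1, x2, n2 = 40, 100, 50, 120
print(se_unpooled(x1 / n1, n1, x2 / n2, n2))
print(se_pooled(x1, n1, x2, n2))
```

With these counts the two values come out close but not equal, which is exactly the puzzle: the formulas genuinely produce different numbers from the same data.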

What I don't understand is why these are different. They're both standard errors for the same difference of proportions, so why should they differ?