Accuracy of confidence intervals

Suppose I have iid observations $X_i$ (empirical mean $\overline{X}_n$), drawn from a distribution with unknown mean $\mu$ and known variance $\sigma^2$. To build a confidence interval for $\mu$ I can use the central limit theorem, which states:

$$\frac{\sqrt{n}(\overline{X}_n - \mu)}{\sigma} \approx \mathcal{N}(0,1)$$

and get the following approximation (if I am not mistaken), with $\varphi$ being the quantile function of the standard normal distribution:

$$\mathbb{P}\left(\mu \in \left[\overline{X}_n - \frac{\sigma}{\sqrt{n}}\varphi_{1-\frac{\alpha}{2}};\ \overline{X}_n + \frac{\sigma}{\sqrt{n}}\varphi_{1-\frac{\alpha}{2}}\right]\right) \approx 1-\alpha$$

I've always been told to just provide this as an answer for an interval with $1-\alpha$ confidence level. But what about the real confidence level? It must be something like $1-\alpha-\epsilon_n$, right? What can be said about $\epsilon_n$?