The GL binding state is one texture name per target (i.e. 1D/2D/3D/cube). So when calling
glBindTexture(GL_TEXTURE_2D, my2dTex)
glBindTexture(GL_TEXTURE_1D, my1dTex)
the GL will remember both settings.
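If you want to verify this, both bindings can be read back afterwards; a quick sketch using glGetIntegerv:

GLint bound1d = 0, bound2d = 0;
/* Each target has its own binding point, so both values survive. */
glGetIntegerv(GL_TEXTURE_BINDING_2D, &bound2d);  /* -> my2dTex */
glGetIntegerv(GL_TEXTURE_BINDING_1D, &bound1d);  /* -> my1dTex */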
Now, which one the GL will actually use depends on whether a shader is active.
If a shader is bound, the GL uses whatever the shader says to use, based on the sampler type (sampler1D, sampler2D, ...).
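For example, a GL 2.0-style sketch (here "prog" is assumed to be an already-linked program whose fragment shader declares "uniform sampler2D tex;"):

glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tex"), 0);  /* sample from texture unit 0 */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, my2dTex);  /* used: the sampler is a sampler2D */
glBindTexture(GL_TEXTURE_1D, my1dTex);  /* ignored by this shader */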
If no shader is bound, then it first depends on which glEnable calls have been made:
glEnable(GL_TEXTURE_2D)
glEnable(GL_TEXTURE_1D)
If both are enabled, there is a static priority rule in the spec (3.8.15 Texture Application in the GL 1.5 spec).
Cube > 3D > 2D > 1D
So in your case, if both your texture targets are enabled, the 2D one will be used.
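In code, that fixed-function case looks something like this (a sketch, reusing my1dTex/my2dTex from above):

glBindTexture(GL_TEXTURE_1D, my1dTex);
glBindTexture(GL_TEXTURE_2D, my2dTex);
glEnable(GL_TEXTURE_1D);
glEnable(GL_TEXTURE_2D);
/* Both targets enabled: the priority rule picks the 2D texture. */
glDisable(GL_TEXTURE_2D);
/* With the higher-priority target disabled, the 1D texture is used. */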
As a side note, notice how a shader does not care whether or not the texture target is enabled...
Edit to add:
And for the people who really want to get into the gritty details: a texture is always bound for each target on each texture unit. The name 0 (the default for the binding state) corresponds to a series of default texture objects, one per target. So
glBindTexture(GL_TEXTURE_2D, 0)
and
glBindTexture(GL_TEXTURE_1D, 0)
both bind a texture, but not the same one...
This is historical, specified to match the behavior of GL 1.0, where texture objects did not exist yet. I am not sure what the deprecation in GL 3.0 did with this, though.
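To make "not the same one" concrete, a legacy/compatibility-context sketch (pixels2d and pixels1d are placeholder image data, not from the question):

glBindTexture(GL_TEXTURE_2D, 0);  /* the default 2D texture object */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4, 4, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels2d);
glBindTexture(GL_TEXTURE_1D, 0);  /* the default 1D texture object: a different object */
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 4, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels1d);
/* The two uploads land in two distinct texture objects, both named 0. */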