Let X be the set of all sets that do not contain themselves. Is X a member of X?
The question is ill-posed in standard ZFC (Zermelo-Fraenkel set theory with the axiom of Choice) because the object thus defined is not a set.
Since (again, assuming standard ZFC) your class { x : x ∉ x } is not a set, the answer is no: it is not an element of itself (even as a class), since only sets can be elements of classes or sets.
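To see why no such set can exist, note that the contradiction needs nothing beyond the defining property itself. Here is a minimal Lean 4 sketch (no imports needed); the names `russell`, `mem`, and `r` are my own, with `mem` an abstract stand-in for membership and `hr` the assumption that `r` behaves like the Russell set:

```lean
-- Any relation admitting a "Russell set" r is contradictory:
-- from  mem r r ↔ ¬ mem r r  we derive False directly.
theorem russell {V : Type} (mem : V → V → Prop)
    (r : V) (hr : ∀ x, mem x r ↔ ¬ mem x x) : False :=
  have h : mem r r ↔ ¬ mem r r := hr r
  have hn : ¬ mem r r := fun hm => h.mp hm hm
  hn (h.mpr hn)
```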
By the way, once you accept the axiom of foundation, no set can be an element of itself: if x ∈ x, then the singleton {x} has no ∈-minimal element (its only element, x, satisfies x ∈ x ∩ {x}), contradicting foundation.
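That one-liner can also be sketched formally. In the Lean 4 fragment below (assuming Mathlib for the tactics), `sing` is a hypothetical stand-in for the singleton operation and `found` is foundation specialized to singletons:

```lean
import Mathlib

-- Foundation for {x}: some element of {x} shares no member with {x}.
-- From this, x ∈ x is impossible.
theorem no_self_mem {V : Type} (mem : V → V → Prop) (sing : V → V)
    (hsing : ∀ x y, mem y (sing x) ↔ y = x)
    (found : ∀ x, ∃ y, mem y (sing x) ∧ ∀ z, mem z (sing x) → ¬ mem z y) :
    ∀ x, ¬ mem x x := by
  intro x hx
  obtain ⟨y, hy, hmin⟩ := found x
  -- the ∈-minimal element of {x} can only be x itself...
  have hyx : y = x := (hsing x y).mp hy
  rw [hyx] at hy hmin
  -- ...but then x ∈ x violates minimality
  exact hmin x hy hx
```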
Of course, the nice thing about math is that you can choose whichever axioms you want :) but believing in paradoxes is just weird.
In ZFC, either the axiom of foundation [as mentioned] or the axiom schema of separation (restricted comprehension) will prohibit this. The first, for obvious reasons; the second because it only says that, for a given set z and first-order property P, you can construct { x ∈ z : P(x) }, and to generate the Russell set you would need z = V (the class of all sets), which is not a set (i.e. cannot be produced by any of the given axioms).
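Running the Russell argument inside separation is exactly what shows V cannot be a set: applied to a universal set, separation would hand the Russell set back to us. A Lean 4 sketch (assuming Mathlib for the tactics; `sep` is a hypothetical second-order rendering of the first-order schema, and `u` a purported universal set):

```lean
import Mathlib

-- Separation plus a universal set u would resurrect Russell's paradox,
-- so no set can contain every set.
theorem no_universal_set {V : Type} (mem : V → V → Prop)
    (sep : ∀ (z : V) (P : V → Prop), ∃ s, ∀ x, mem x s ↔ mem x z ∧ P x)
    (u : V) (hu : ∀ x, mem x u) : False := by
  -- carve the Russell set out of u
  obtain ⟨s, hs⟩ := sep u (fun x => ¬ mem x x)
  -- since everything is in u, membership in s is exactly non-self-membership
  have h : mem s s ↔ ¬ mem s s :=
    ⟨fun hm => ((hs s).mp hm).2, fun hn => (hs s).mpr ⟨hu s, hn⟩⟩
  have hn : ¬ mem s s := fun hm => h.mp hm hm
  exact hn (h.mpr hn)
```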
In New Foundations (NF), comprehension is restricted to stratified formulas: those whose variables can be assigned integer "types" so that in every subformula x ∈ y the type of y is exactly one greater than the type of x. "x ∉ x" is not stratified (its single variable would need both type n and type n + 1), and so again we cannot define the Russell set. Somewhat amusingly, however, V is a set in NF, since "x = x" is stratified.
In von Neumann–Bernays–Gödel set theory (NBG), the class R = { x : x is a set and x ∉ x } is definable. We then ask whether R ∈ R; if so, then also R ∉ R, giving a contradiction. Thus we must have R ∉ R. But there is no contradiction here, since for any given class A, A ∉ R implies either A ∈ A or A is a proper class. Since R ∉ R, the first alternative fails for A = R, so R must simply be a proper class.
Of course, the class R = { x : x ∉ x }, without the restriction, is simply not definable in NBG.
Also of note is that the above procedure can be carried out as a formal proof within NBG, whereas in ZFC one has to resort to meta-reasoning.
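To illustrate that last point, here is a minimal Lean 4 sketch of the NBG-style argument (no imports needed; `isSet` and the comprehension hypothesis `hr` are my own stand-ins):

```lean
-- From the defining property of R we get both R ∉ R and "R is proper",
-- and no contradiction: R simply fails the isSet half of the membership test.
theorem russell_is_proper {C : Type} (mem : C → C → Prop) (isSet : C → Prop)
    (r : C) (hr : ∀ x, mem x r ↔ isSet x ∧ ¬ mem x x) :
    ¬ mem r r ∧ ¬ isSet r :=
  have hnr : ¬ mem r r := fun h => ((hr r).mp h).2 h
  ⟨hnr, fun hset => hnr ((hr r).mpr ⟨hset, hnr⟩)⟩
```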
The most elegant proof I've ever seen resembles Russell's paradox closely.
Theorem (Cantor, I suppose). Let X be a set, and 2^X the set of its subsets. Then card(X) < card(2^X).
Proof. Surely card(X) <= card(2^X), since x ↦ {x} is a bijection between X and the set of singletons in 2^X. We must prove that card(X) != card(2^X).
Suppose there is a bijection between X and 2^X. Then each xk in X is mapped to a set Ak in 2^X.
- x1 ---> A1
- x2 ---> A2
- ...
- xk ---> Ak
- ...
For each xk there are two possibilities: either xk belongs to Ak, or it does not. Let M be the set of all those xk that do not belong to their corresponding set Ak. M is a subset of X, so there must exist an element m of X which is mapped to M by the bijection.
Does m belong to M? If it does, then it does not, for M is the set of those x that do not belong to the set they are mapped to. If it does not, then it does, for M contains all such x. This contradiction stems from the assumption that a bijection exists. Thus no bijection can exist, the two cardinalities are different, and the theorem is proved.
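Notice that the argument never uses injectivity: it shows that no map from X to 2^X is even surjective. Here is that diagonal step as a Lean 4 sketch (assuming Mathlib; the theorem name cantor_diag is mine, and if memory serves Mathlib has a general version as Function.cantor_surjective):

```lean
import Mathlib

-- No f : X → Set X is surjective: the diagonal set M = {x | x ∉ f x}
-- cannot equal f m for any m, exactly as in the prose proof above.
theorem cantor_diag {X : Type} (f : X → Set X) : ¬ Function.Surjective f := by
  intro hf
  obtain ⟨m, hm⟩ := hf {x | x ∉ f x}   -- m is the element mapped to M
  by_cases h : m ∈ f m
  · -- if m ∈ f m = M, then by M's definition m ∉ f m
    have h' : m ∉ f m := by rw [hm] at h; exact h
    exact h' h
  · -- if m ∉ f m = M, then m qualifies for membership in M = f m
    have h' : m ∈ f m := by rw [hm]; exact h
    exact h h'
```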