that would be a lot clearer. i’ve just been burned in the past by notation in analysis.
my two most painful memories are:
in the (baby) rudin textbook, he uses f(x+) to denote the limit of f from the right, and f(x-) to denote the limit of f from the left.
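just to unpack that notation for anyone following along: f(x+) is the limit of f(t) as t approaches x from above and f(x-) the limit as t approaches x from below, i.e. f(x+) = lim_{t → x, t > x} f(t) and f(x-) = lim_{t → x, t < x} f(t). the more common spelling these days is probably lim_{t → x⁺} f(t) and lim_{t → x⁻} f(t).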
in friedman's analysis textbook, he writes the direct sum of vector spaces as M + N instead of using the standard notation M ⊕ N. to make matters worse, he uses M ⊕ N to mean that M is orthogonal to N.
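to spell out the clash: the usual convention, as far as i know, is that M ⊕ N denotes the direct sum, i.e. the subspace M + N = { m + n : m ∈ M, n ∈ N } together with the condition M ∩ N = {0}, so every element decomposes uniquely. orthogonality of M and N is a strictly stronger condition that only even makes sense once you have an inner product, so overloading ⊕ for it is extra confusing.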
there’s the usual “null spaces” instead of “kernel” nonsense. i’ve also seen lots of analysis books use the → symbol to define functions when they really should have been using the ↦ symbol.
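for anyone who hasn’t internalized the distinction: → goes between the domain and the codomain, ↦ between an element and its image, e.g. f : ℝ → ℝ, x ↦ x². writing x → x² for the squaring map mixes the two arrows up.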
unless f(x0 ± δ) is some kind of funky shorthand for the set { f(x) : x ∈ ℝ, | x - x0 | < δ }. in that case, the definition would be “correct”.
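i’m guessing at the surrounding definition here, but read that way it would presumably say something like: for every ε > 0 there is a δ > 0 such that f(x0 ± δ) ⊆ (L - ε, L + ε), i.e. every point within δ of x0 gets mapped to within ε of L (or of f(x0), depending on whether a limit or continuity was being defined). that’s the only reading i can see under which the shorthand isn’t just a typo.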
it’s much more likely that it’s a typo, but analysts have been known to cook up some pretty bizarre notation from time to time, so it’s not totally out of the question.
i think the ε-δ approach leads to way longer, more cumbersome proofs, and it puts a good amount of distance between the “idea being proved” and the proof itself.
it’s especially rough when you’re chasing around multiple “limit variables” that depend on different things. i still have flashbacks to my second measure theory course, where we would spend an entire two-hour lecture on one theorem, chasing ε and η around different parts of the proof.
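for reference, this is the shape of definition being complained about: lim_{x → a} f(x) = L means that for every ε > 0 there exists a δ > 0 such that 0 < |x - a| < δ implies |f(x) - L| < ε. the nested “for every ... there exists ...” structure is what makes the bookkeeping blow up once several of these variables depend on one another.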
i still feel like this whole ε-δ thing could have been avoided if we had just put more effort into the “infinitesimals” approach, which is a bit more intuitive anyways.
but on the other hand, you need a lot of heavy tools to make infinitesimals work in a rigorous setting, and shortcuts can be nice sometimes.
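for anyone curious what the rigorous version looks like: in robinson’s nonstandard analysis you extend ℝ to the hyperreals, and then, roughly, f'(x) = st( (f(x + dx) - f(x)) / dx ) for any nonzero infinitesimal dx, where st takes the standard part of a finite hyperreal. the “heavy tools” are things like the ultrapower construction and the transfer principle you need to make all of that legitimate.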