Defragmentation does not preserve extent sharing, e.g. for files created by cp --reflink or present in multiple snapshots. Because of that, data space consumption may increase.
For example, to (re)compress the existing data you would have to defragment the filesystem with btrfs filesystem defragment -r -v -czstd /, where zstd is the compression algorithm and / is the path to recurse into. With this command the default zstd compression level is used, which is level 3.
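A minimal sketch of that command, assuming you run it as root and that / is really the path you want to recompress:

    # Recompress all files under / with zstd at the default level (3)
    # -r = recursive, -v = verbose, -czstd = compress while defragmenting
    sudo btrfs filesystem defragment -r -v -czstd /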
Be careful: defragmenting a Btrfs filesystem can duplicate data, because shared extents get broken up (see above).
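If you want to see how much data is shared versus exclusive before and after, something like btrfs filesystem du can help (the path below is just a placeholder):

    # Shows total, exclusive and shared usage for the given directory or subvolume
    sudo btrfs filesystem du -s /path/to/subvolume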
As for the mount options, if you decide to use the zstd algorithm with level 1 compression, just add compress=zstd:1 or compress-force=zstd:1 to the mount options (in fstab or while mounting manually).
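A minimal sketch, assuming the UUID, device name and mount point are placeholders for your own setup:

    # /etc/fstab: mount the Btrfs filesystem with zstd level 1 compression
    UUID=<your-fs-uuid>  /  btrfs  defaults,compress=zstd:1  0  0

    # Or when mounting manually:
    sudo mount -o compress=zstd:1 /dev/sdXn /mnt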
Improved support for RAID56 is in development and will eventually fix the problems with the current implementation. It is a backward-incompatible feature and has to be enabled at mkfs time.
It is fine. You can use the duperemove tool (or bees) to find duplicated data and deduplicate it, i.e. share the extents again.
https://btrfs.readthedocs.io/en/latest/Deduplication.html
So it is out-of-band deduplication and has to be done manually.
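A quick duperemove sketch; the directory and hash-file paths are only examples:

    # -r = recurse into the directory, -d = actually submit dedupe requests
    # --hashfile stores block hashes so repeated runs can skip unchanged data
    sudo duperemove -dr --hashfile=/var/tmp/dedupe.hash /path/to/data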
Also, recent versions of cp and most file managers make a reflink copy by default (data blocks are only duplicated when they are modified).
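You can also request a reflink copy explicitly; with --reflink=always the command fails instead of silently falling back to a full copy if the filesystem cannot share extents:

    # The copy shares extents with the original until either file is modified (CoW)
    cp --reflink=always original.img original-copy.img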