o None (well, that we know of, anyway!)

o Large Files:

  While not a bug in dobackup itself, the default archiver, afio, does have
  problems writing archive files larger than 2GiB. If you need to make very
  large backups, it is suggested that you split the output files.

  In /etc/dobackup.conf, replace:
  "BackupProg=/usr/bin/afio -o -T 10k -Z -x"
  with:
  "BackupProg=/usr/bin/afio -o -T 10k -Z -x | split -b 2g >"

  This will generate normal backup files, split at every 2GiB boundary. See
  the split(1) manpage for more details. The resultant files cannot be
  restored individually; you will need to concatenate them together first.
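  As a quick sanity check of the split-then-concatenate round trip, the
  following sketch uses a small dummy file and 2MiB pieces in place of a real
  afio archive and 2GiB pieces (all paths are temporary and hypothetical):

  ```shell
  set -e
  tmp=$(mktemp -d)
  # Stand-in for an afio archive stream: 5 MiB of random bytes.
  head -c 5M /dev/urandom > "$tmp/archive"
  # Split into pieces of at most 2 MiB; suffixes run aa, ab, ac, ...
  split -b 2M "$tmp/archive" "$tmp/archive."
  # The shell expands the glob in suffix order, so a plain cat rejoins
  # the pieces in the right order.
  cat "$tmp"/archive.?? > "$tmp/rejoined"
  cmp -s "$tmp/archive" "$tmp/rejoined" && result=ok || result=mismatch
  echo "$result"
  rm -rf "$tmp"
  ```

  The same cat-then-restore pattern applies to the real split backup files:
  concatenate all pieces in suffix order and feed the result to afio.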

  From the AFIO docs:

      "Looked into the case of reading/writing archives to/from regular
      filesystem files which are bigger than 2 GB.  Seem to work on newer
      linux systems supporting such large files.  On the recent linux system
      I tried (Red Hat 6.2, kernel 2.4.2, GCC 2.95.3, libc.so.6 ->
      libc-2.1.3.so) a freshly compiled afio can read and write >2GB archive
      files to the filesystem.  HOWEVER I also got reports from others with
      _the same_ or _newer_ versions of stuff that their compiled afio is
      not able to do this.  I have not found a pattern to this: your best
      bet is to recompile the afio executable on your platform and try."
      
  AFIO's internal format (much like cpio's) cannot properly handle archived
  files (files in the backup set, not the resultant afio archive) larger
  than 2GiB. Previously, such a file would be fully archived, but the wrong
  information was stored in its header, making it a formidable challenge to
  restore. AFIO 2.4.7 will now refuse to archive any file that is >= 2GiB.
  
  From the AFIO docs:
  
      "No matter how it is compiled under Linux, afio will now issue a
      warning when encountering a >=2GB file in the set to archive, and not
      archive that file. The warning will cause nonzero exit (unless -1
      option changes this)."
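  To see in advance which files a newer afio will skip, you can search the
  backup set for files at or above the 2GiB limit with find(1). A minimal
  sketch (the directory and file names are only examples; sparse files stand
  in for real large files so the demo costs almost no disk space):

  ```shell
  set -e
  dir=$(mktemp -d)
  # Sparse stand-ins: apparent sizes of 3 GiB and 1 GiB.
  truncate -s 3G "$dir/over2gib"
  truncate -s 1G "$dir/under2gib"
  # -size +2147483647c matches files strictly larger than 2^31 - 1 bytes,
  # i.e. exactly the >= 2GiB files that afio 2.4.7 refuses to archive.
  big=$(find "$dir" -type f -size +2147483647c -print)
  echo "$big"
  rm -rf "$dir"
  ```

  Run the same find against your real backup root before invoking dobackup
  to get a list of files that will be skipped.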
