You may have already wondered why you can't use the full disk space on an ext2/3/4 partition. You have probably seen something like this, where the used and available blocks don't add up to the number of blocks on the device/partition:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 15391664 38184 14571608 1% /mnt
The reason is that when you create an ext filesystem (no matter whether with mkfs or YaST), 5% of the blocks are reserved for the super-user. This allows e.g. root daemons to keep writing to the partition after unprivileged processes have stopped because the partition is full.
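You can see the reservation in the df output above: 15391664 - 38184 - 14571608 = 781872 1K-blocks are unaccounted for, which is roughly 5% of the partition. If you want to check the current reservation on your own device, tune2fs can read it from the superblock, even while the filesystem is mounted (assuming /dev/sdc1 as in the example above):
# read-only query of the superblock, works on a mounted filesystem
tune2fs -l /dev/sdc1 | grep -i 'reserved block count'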
While this makes perfect sense on system partitions such as '/', it makes no sense on data partitions or external media (in most cases). On an external 2TB/1.8TiB drive, for example, this means about 100GByte/~92GiB that you can't use as a normal user. But you can get this space back. Either you set the number of reserved blocks to zero when you create the file system:
mkfs -t ext4 -m 0 /dev/sdc1
or you do it later:
# make sure partition is unmounted
umount /mnt
tune2fs -m 0 /dev/sdc1
On a LUKS encrypted device you have to do this:
# on a LUKS device
umount /media/disk
tune2fs -m 0 /dev/mapper/udisks-luks-uuid-X
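If you are not sure what the mapper device is called on your system (the udisks-luks-uuid-X name above is just an example; the name depends on your distribution and how the volume was unlocked), lsblk shows the whole device stack:
# list block devices, their dm-crypt mappings and mount points
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT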
You should see output like this:
tune2fs 1.41.14 (22-Dec-2010)
Setting reserved blocks percentage to 0% (0 blocks)
And if you mount the device again, you can see with df that you have the full space available:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 15391664 38184 15353480 1% /mnt
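For reference, the commands behind that last check, as a minimal sketch assuming the same device and mount point as above:
# remount and verify the available space
mount /dev/sdc1 /mnt
df /mnt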