I am having a problem reconciling the space available on my EBS volume. According to the AWS console the volume is 50GB and is attached to an instance.
If I ssh to this instance and do a df -h, I get the following output:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              15G   13G  3.0G  81% /
udev                  858M   76K  858M   1% /dev
none                  858M     0  858M   0% /dev/shm
none                  858M   72K  858M   1% /var/run
none                  858M     0  858M   0% /var/lock
none                  858M     0  858M   0% /lib/init/rw
I am pretty new to AWS. I interpret this as "there is a device attached and it has 15GB capacity. What's more, you're nearly out of space!"
Can anyone point out the cause of the apparent discrepancy between the space advertised in the console and what is displayed on the instance?
Many thanks in advance
S
Yes, the issue is simple. The volume is attached to the instance, but it has not been formatted or mounted yet.
Check in the AWS console which device name the volume is attached as - most likely /dev/sdf.
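You can also confirm this from the instance itself. As a quick check (note that on some kernels the device may show up as /dev/xvdf rather than /dev/sdf):

sudo fdisk -l    # lists all attached block devices, mounted or not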
Then (on Ubuntu):
sudo mkfs.ext3 /dev/sdf     # WARNING: formatting erases anything already on the volume
sudo mkdir /ebs             # create a mount point
sudo mount /dev/sdf /ebs    # mount the volume at /ebs
The first line formats the volume using the ext3 file system type. This is pretty standard, but depending on your usage (e.g. app server, database server, ...) you could also select another one like ext4 or xfs.
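For example, a rough sketch if you would rather use ext4 or xfs (same device name as above; mkfs.xfs requires the xfsprogs package to be installed):

sudo mkfs.ext4 /dev/sdf    # ext4: the newer default, faster fsck
sudo mkfs.xfs /dev/sdf     # xfs: scales well for large files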
The second command creates a mount point and the third mounts the volume there. Effectively, the new volume will be available at /ebs. It should also show up in df now.
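To double-check (the reported size will be slightly under 50G, since the filesystem reserves some space for itself):

df -h /ebs    # should now show /dev/sdf mounted on /ebs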
Last but not least, maybe also add an entry to /etc/fstab to make it reboot-proof.
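A minimal sketch of such an entry, assuming the device and mount point from above (nobootwait is Ubuntu-specific and keeps the boot from hanging if the volume is ever detached):

/dev/sdf   /ebs   ext3   defaults,nobootwait   0   0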