Mounting S3 as a filesystem with S3FS

01 May 2019

S3FS

This month I spent time building a seamless file transfer system between my development machines and the cloud using an AWS S3 bucket and S3FS. I have a Mac laptop and an Ubuntu-based desktop with important files that often get out of sync with one another. These files live outside of version control, and I was looking for a better way than Google Drive or emails with attachments to move them between machines. After some quick research, I found S3FS as a way to mount S3 as a filesystem and decided it would be the best tool to use.

Easy Install

The developers of S3FS make it pretty easy to install the tool across Unix-based platforms, and I had no trouble getting it installed on my Mac using:

brew cask install osxfuse
brew install s3fs

and on Ubuntu with:

sudo apt-get install s3fs

Credentials & AWS

The most complicated part of getting the tool up and running is configuring the AWS credentials and the S3 bucket.

Creating the bucket was pretty much a snap, but be sure to configure the bucket's settings correctly and give it a DNS-compliant name. I named mine itsltns, as I knew it was DNS compliant and would be easy to remember.
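
For reference, the bucket can also be created from the terminal if you happen to have the AWS CLI installed and configured; a quick sketch (the region here is just an example):

# create a DNS-compliant bucket (names must be globally unique)
aws s3 mb s3://itsltns --region us-east-1

# confirm it exists
aws s3 ls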

TO MAKE THIS CLEAR: the credentials s3fs is looking for are the Access Key ID and Secret Access Key. These can be found by logging in to AWS and finding the My Security Credentials page. This pair of keys is most commonly used by developers when building AWS applications. Make sure these keys and your IAM user have the correct permissions for S3; I gave my IAM user full permissions to read and write.

After generating the key pair, the keys need to go in a file somewhere you can find them. The recommended location is ~/.passwd-s3fs, using ACCESS_KEY_ID:ACCESS_KEY_SECRET as the format, but I put them with my AWS SSH keys in ~/.ssh/AWS/passwd-s3fs. The file needs read and write permissions for the owner and none for group and others. After a chmod 600 ~/.ssh/AWS/passwd-s3fs, the s3fs command can read the file and access the S3 bucket.
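
Putting that together, creating the password file and locking down its permissions looks roughly like this (swap in your own keys and preferred location):

# store the key pair in the ACCESS_KEY_ID:ACCESS_KEY_SECRET format s3fs expects
echo "ACCESS_KEY_ID:ACCESS_KEY_SECRET" > ~/.ssh/AWS/passwd-s3fs

# owner read/write only, nothing for group or others
chmod 600 ~/.ssh/AWS/passwd-s3fs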

Mountpoints

Keeping everything simple and the mountpoint single-user owned, I opted to create a mountpoint on both machines within my home directory. I created this mountpoint using mkdir ~/itsltns-s3 and, as a little foreshadowing, I also ran chmod a+rwx ~/itsltns-s3 because I ran into permission issues when reading and writing files from Mac to Ubuntu and Ubuntu to Mac.
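
Put together, the mountpoint setup on each machine was just:

# single-user mountpoint in my home directory
mkdir ~/itsltns-s3

# loosen permissions to avoid cross-machine read/write issues
chmod a+rwx ~/itsltns-s3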

With the mountpoint configured, I was up and running using the command:

s3fs itsltns ~/itsltns-s3 -o passwd_file=${HOME}/.ssh/AWS/passwd-s3fs

This allowed me to run a basic ls ~/itsltns-s3, which came back empty. To test that the mount was working, I added a test.txt file, and sure enough it was copied up to the cloud and test.txt showed up in the itsltns S3 bucket.
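
The sanity check looked something like this (the last line is optional and assumes the AWS CLI is configured):

# mount starts out empty
ls ~/itsltns-s3

# drop in a test file and confirm it landed in the bucket
touch ~/itsltns-s3/test.txt
aws s3 ls s3://itsltns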

Issues & Logging

If you run into any issues with mounting or s3fs at all, I would highly recommend enabling logging by adding -o dbglevel=info -f -o curldbg to the end of the command. After adding the debug options, the command looks like:

s3fs itsltns ~/itsltns-s3 -o passwd_file=${HOME}/.ssh/AWS/passwd-s3fs -o dbglevel=info -f -o curldbg
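
Note that -f keeps s3fs in the foreground, so you can watch the log output and stop it with Ctrl-C when done; if the mountpoint gets left in a weird state, it can be unmounted by hand (these are the generic FUSE/OS unmount commands, nothing s3fs-specific):

# macOS
umount ~/itsltns-s3

# Ubuntu
fusermount -u ~/itsltns-s3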

MacOS - launchd - mount on user login

One of the features I wanted was automounting the S3 drive on login, so I opted to create a launchd agent that runs at login.

When working with launchd services, don't bother installing the LaunchControl app that is advertised on the launchd website unless you plan on doing serious agent or service development. The application has minimal functionality and requires a license.

Setting up the launchd agent wasn't terrible, but I did have to debug the service with launchctl list | grep itsltns a couple of times to get the status code of the agent.

Without getting too deep into launchd detail, I created the file ~/Library/LaunchAgents/local.itsltns-s3.plist as my launchd service with the contents:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>Label</key>
        <string>local.itsltns-s3.plist</string>
        <key>Program</key>
        <string>/Users/Jack/Library/LaunchAgents/itsltns-s3.sh</string>
        <key>RunAtLoad</key>
        <true/>
    </dict>
</plist>

If you look in the .plist XML file, you will see the <key> tags surrounding the word "Program" and the <string> tags surrounding the location of the shell script to be executed.

With this, I also created the file ~/Library/LaunchAgents/itsltns-s3.sh with the contents:

#!/bin/bash
/usr/local/bin/s3fs itsltns /Users/Jack/itsltns-s3/ -o passwd_file=${HOME}/.ssh/aws/passwd-s3fs -o volname="ITSLTNS - S3"

The itsltns-s3.sh script uses absolute paths for the s3fs binary, the mountpoint, and the s3fs password file; I didn't bother setting a PATH inside the script.

The one additional piece you may notice in the above script is the osxfuse option to rename the attached drive, -o volname="", which makes the volume look prettier in Finder (put the desired drive name between the quotes).
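
If the agent runs but nothing mounts, one thing worth double-checking is that the script is actually executable, since launchd runs it directly:

chmod +x ~/Library/LaunchAgents/itsltns-s3.sh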

With the additions and the launchd service created, my Mac was set up and ready to go at login; everything was configured and up to par.
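
To kick things off without logging out and back in, the agent can also be loaded and checked by hand; the second column of launchctl list is the last exit status, and 0 means the mount script ran cleanly:

launchctl load ~/Library/LaunchAgents/local.itsltns-s3.plist
launchctl list | grep itsltns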

Ubuntu - /etc/fstab - mounting on boot

Configuring Ubuntu to mount the S3 bucket was a little more challenging and I ran into some issues along the way, but it really only took about 20 minutes, a quick Google search, and one reboot.

S3FS offers an example for automounting using the /etc/fstab file, and I ended up with a similar configuration to their example. My /etc/fstab file had s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0 added at the bottom of it.
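
Filled in with my bucket and mountpoint, the line looks something like this (the home directory path is an example; use whatever applies on your machine):

s3fs#itsltns /home/jack/itsltns-s3 fuse _netdev,allow_other 0 0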

I ran sudo mount -a, and sure enough S3FS couldn't mount because it had no idea where my developer credentials were. The sudo mount -a command spat out a response something like "cannot find access and secret keys."

Mounts from /etc/fstab run as root (hence the sudo), and root looks for the password file in a different place, so I used a symbolic link to fix that. Running sudo ln -s /home/jack/Documents/.ssh/aws/passwd-s3fs /etc/passwd-s3fs was the band-aid that prevented me from doing something dumb.

I re-ran sudo mount -a and sure enough it mounted with ease.

I rebooted my machine and all seemed to be in proper order.

Pricing

I haven't run this for months on end (I actually just got it set up), but it looks like it will be around $2/month with all of the requests that are made and the storage pricing. That estimate was based on 16 GB of storage per month and 600,000 total requests.

If I have to, I will revert to mounting manually, but for now it is nice to have S3 auto-mounted at login (on the Mac) and at boot (on Linux).

See AWS Pricing Calculator for more on pricing.

Notes & Todo

  • macOS attaches extended attributes and metadata to files, and it kinda sucks. File attributes can be removed with xattr -c filename, but I need to make sure every file in the bucket has them stripped. The reason for this is that Ubuntu is unable to read these attributes (different filesystems) and they also bug me.

  • After looking at this post, I need to move the keys on both machines to similar locations. It's not fun managing two computers where ~/Documents/.ssh/aws and ~/.ssh/aws have similar files. They should really be one or the other.

  • I will probably have to revise my s3fs setup to allow for multiple S3 mounts, as I definitely see it as a way to manage personal and work files between machines and AWS accounts. I also might look into autofs with some custom scripts to handle this.

  • Permissions among the mountpoints on different machines may need to be looked at, but for now I am the only person using my machines, so there is no security threat (knocks on wood).

  • I also want to find a place on my Mac for launchd scripts. The ~/Library/LaunchAgents directory seems like a place where files could easily be placed and lost; it appears to be a place that can get messy quickly.
