Google Cloud Console
Before we get to the script needed to set up automatic backups for photos and/or screenshots, we need to configure the Google Cloud API for Google Drive. It's best to follow along with the instructions in this video for those steps.
Just a reminder: this process involves generating client secrets and authorization tokens, so it's very important to keep them safe.
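If you'd rather not hardcode these values into the script later in this post, one option is to keep them in a small file that only your user can read, and source that file from the script. This is just a sketch; the file name ~/.drive-backup.env is a placeholder, and the variable names match the ones used in the script below.
cat > ~/.drive-backup.env <<'EOF'
CLIENT_ID="your-client-id"
CLIENT_SECRET="your-client-secret"
REFRESH_TOKEN="your-refresh-token"
EOF
chmod 600 ~/.drive-backup.env   # readable only by your user
# In the script, replace the hardcoded values with:
# source "$HOME/.drive-backup.env"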
In the video, we visit a URL in our browser to get our authorization code. Below is the URL; replace the placeholder with your own values where applicable. You're going to need the authorization code for a later command.
https://accounts.google.com/o/oauth2/auth?client_id=<your-client-id>&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/drive.file&response_type=code
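If it's easier, you can assemble the URL in your terminal instead of editing it by hand. This is just a convenience sketch; CLIENT_ID below stands in for your own value.
CLIENT_ID="your-client-id"   # replace with your own client ID
AUTH_URL="https://accounts.google.com/o/oauth2/auth?client_id=${CLIENT_ID}&redirect_uri=urn:ietf:wg:oauth:2.0:oob&scope=https://www.googleapis.com/auth/drive.file&response_type=code"
echo "Open this URL in your browser: $AUTH_URL"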
We also use a command to get our refresh token. Below is the command; be sure to replace all applicable values, then copy the entire block into your terminal.
curl -X POST https://oauth2.googleapis.com/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "code=AUTHORIZATION_CODE" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET" \
-d "redirect_uri=urn:ietf:wg:oauth:2.0:oob" \
-d "grant_type=authorization_code"
After getting the refresh token and the rest of the keys from the video, we should be set for the script portion.
Bash Script
Below is the script to automatically back up image files (PNG and JPEG) to Google Drive.
#!/bin/bash
WATCH_DIR="/home/phablet/Pictures/Screenshots"
CLIENT_ID="your-client-id"
CLIENT_SECRET="your-client-secret"
REFRESH_TOKEN="your-refresh-token"

# Get a new access token from the refresh token using wget
get_access_token() {
    wget -qO- \
        --post-data="client_id=$CLIENT_ID&client_secret=$CLIENT_SECRET&refresh_token=$REFRESH_TOKEN&grant_type=refresh_token" \
        https://oauth2.googleapis.com/token \
        | grep -oP '(?<="access_token": ")[^"]*'
}

# Upload a file to Google Drive using wget
upload_to_drive() {
    local file_path="$1"
    local file_name=$(basename "$file_path")
    local access_token=$(get_access_token)

    echo "Uploading $file_name to Google Drive..."

    # Create metadata JSON
    local metadata=$(printf '{"name": "%s"}' "$file_name")

    # Pick a MIME type that matches the file extension
    local mime_type="image/png"
    case "${file_name##*.}" in
        jpg|jpeg) mime_type="image/jpeg" ;;
    esac

    # Create a temporary request body (multipart)
    local boundary="BOUNDARY-$(date +%s)"
    local temp_file=$(mktemp)
    {
        echo "--$boundary"
        echo "Content-Type: application/json; charset=UTF-8"
        echo
        echo "$metadata"
        echo "--$boundary"
        echo "Content-Type: $mime_type"
        echo
        cat "$file_path"
        echo
        echo "--$boundary--"
    } > "$temp_file"

    wget -q --header="Authorization: Bearer $access_token" \
        --header="Content-Type: multipart/related; boundary=$boundary" \
        --post-file="$temp_file" \
        "https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart" -O /dev/null

    rm "$temp_file"

    # Rename the file to mark it as uploaded
    local dir_name=$(dirname "$file_path")
    local base_name="${file_name%.*}"
    local extension="${file_name##*.}"
    local new_name="${base_name}-uploaded.${extension}"
    mv "$file_path" "$dir_name/$new_name"
    echo "Done. Renamed to $new_name"
}

# Simple watcher using polling
echo "Watching $WATCH_DIR for new files..."
declare -A seen_files

while true; do
    for file in "$WATCH_DIR"/*.{png,jpg,jpeg}; do
        [ -e "$file" ] || continue

        # Skip files that have already been uploaded
        file_name=$(basename "$file")
        if [[ "$file_name" == *"-uploaded."* ]]; then
            continue
        fi

        if [ -z "${seen_files[$file]}" ]; then
            seen_files["$file"]=1
            upload_to_drive "$file"
        fi
    done
    sleep 10 # check every 10 seconds
done
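To use it, save the script, make it executable, and start it. If you want it to keep running after you close the terminal, nohup is one simple option; the file names here are just placeholders.
chmod +x backup-screenshots.sh
nohup ./backup-screenshots.sh >> backup-screenshots.log 2>&1 &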
Here are a few important points about the script.
- The WATCH_DIR is the directory that this script will monitor. Any PNG or JPEG files in this directory will be uploaded to Google Drive.
- After we've uploaded a file to Google Drive, we append -uploaded to the end of its name. This is so future instances of our script know that this file has already been uploaded; otherwise, we'd upload every file in the WATCH_DIR each time the script is run. (See the rename example after this list.)
- With the current implementation of the script, I'm backing up the screenshots on my Ubuntu Touch device. You can change WATCH_DIR to your pictures directory, or any other directory you feel is necessary.
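As a quick illustration of the rename step mentioned above, the script splits the file name at its last dot with bash parameter expansion and inserts the -uploaded suffix before the extension. The file name below is just an example.
file_name="screenshot-001.png"             # example file name
base_name="${file_name%.*}"                # screenshot-001
extension="${file_name##*.}"               # png
echo "${base_name}-uploaded.${extension}"  # prints: screenshot-001-uploaded.png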