Tutorial #Backup #Cloudflare
Backing up files to Cloudflare R2 with rclone
Install rclone
sudo -v ; curl https://rclone.org/install.sh | sudo bash
Be sure to install it from here; the latest version in apt-get still does not support R2.
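You can confirm what actually got installed with the command below. As far as I know, the Cloudflare provider option only appeared around rclone v1.59, so anything older won't offer it (treat the exact version as approximate):

```bash
# Show the installed rclone version; R2 needs a fairly recent build.
rclone version
```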
Configure rclone
Config file
cd ~/.config/rclone
There will normally be an rclone.conf here; create it if it doesn't exist, then edit it:
```ini
[wordpress_backup]
type = s3
provider = Cloudflare
access_key_id = your_access_key_id
secret_access_key = your_secret_access_key
region = auto
endpoint = https://your_account_id.r2.cloudflarestorage.com
```
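With the config in place, a quick sanity check is to list the buckets on the remote; if the credentials or endpoint are wrong, this fails immediately (wordpress_backup is the remote name from above):

```bash
# List all buckets visible to this remote; an error here usually means
# a bad access key, secret, or endpoint.
rclone lsd wordpress_backup:
```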
Command line
rclone config
![rclone config menu](https://linux.do/uploads/default/original/3X/7/4/748ab918e89048b6ad170655f3aa638422d390d2.webp)
Enter n to create a new remote, then fill in a name.
Since my plan here is to back up WordPress, I enter wordpress_backup.
A long list of choices then pops up.
Here is a table of the services and their descriptions (generated by GPT and then edited):
|No.|Service|Description|
|---|---|---|
|1|1Fichier|A file hosting service for storing and sharing files.|
|2|Alias for an existing remote|Creates an alias for an existing remote for easier management.|
|3|Amazon Drive|Amazon's cloud storage service; no longer accepts uploads.|
|4|Amazon S3 Compliant Storage Provider|S3-compatible storage providers, including AWS, Alibaba Cloud, Ceph, and Cloudflare R2.|
|5|Backblaze B2|An affordable cloud storage solution offering object storage.|
|6|Box|A cloud storage service with online file storage and collaboration features.|
|7|Cache a remote|Caches a remote to speed up access.|
|8|Citrix Sharefile|Enterprise-grade file storage and sharing, suited for business use.|
|9|Dropbox|A popular cloud storage service for file syncing and sharing.|
|10|Encrypt/Decrypt a remote|Encrypts or decrypts files on a remote for extra security.|
|11|FTP Connection|Connects to a remote server over FTP for file transfer.|
|12|Google Cloud Storage|Google's cloud storage service, mainly for enterprise use (not Google Drive).|
|13|Google Drive|Google's personal and team file storage and sharing service.|
|14|Google Photos|Google's photo storage and management service.|
|15|Hubic|A cloud storage service from France's Orange.|
|16|In memory object storage system|An in-memory object storage system for high-speed data access.|
|17|Jottacloud|A Norwegian cloud storage service with a focus on privacy.|
|18|Koofr|A service that aggregates multiple cloud storage accounts and also offers its own storage.|
|19|Local Disk|Connects to a local disk or local storage device.|
|20|Mail.ru Cloud|A cloud storage service from Russia's Mail.ru.|
|21|Microsoft Azure Blob Storage|Object storage on Microsoft Azure, supporting large-scale data.|
|22|Microsoft OneDrive|Microsoft's personal cloud storage, integrated with Windows and Office.|
|23|OpenDrive|A cloud storage service offering unlimited storage space.|
|24|OpenStack Swift|Object storage on the OpenStack platform, supporting large-scale cloud storage.|
|25|Pcloud|A secure, easy-to-use cloud storage service with encryption and file sharing.|
|26|Put.io|An online file storage and download service that can fetch content directly from torrents.|
|27|SSH/SFTP Connection|Connects to a remote server over SSH/SFTP for secure file transfer.|
|28|Sugarsync|A cloud storage service with file syncing and backup.|
|29|Transparently chunk/split large files|Transparently chunks or splits large files for easier storage or transfer.|
|30|Union merges the contents of several upstream fs|Merges the contents of several upstream filesystems into one view.|
|31|Webdav|The HTTP-based WebDAV protocol for remote file management and transfer.|
|32|Yandex Disk|A cloud storage service from Russia's Yandex.|
|33|http Connection|Connects over HTTP, for accessing and transferring files on web pages.|
|34|premiumize.me|An all-in-one download and file storage platform supporting multiple sources.|
|35|seafile|An open-source cloud storage service focused on data syncing and collaboration.|
To use Cloudflare R2, choose 4. If the provider list that follows shows Cloudflare, enter its number (6 in my case); otherwise you can simply type Cloudflare, then continue to the next step.
![S3 provider list](https://linux.do/uploads/default/original/3X/b/2/b2fffdbed117dd94d9c4b884303f2941a2181401.webp)
The next prompt asks whether to pull credentials from the environment or enter them manually. We choose manual entry, i.e. false, or just press Enter:
```
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
```
Then enter your access key ID and secret access key.
![Entering the access key ID and secret](https://linux.do/uploads/default/original/3X/4/e/4e0127ac2d3db552ac491caa5f07299443e1503d.webp)
The next step is the region; just enter auto.
After that comes the endpoint; fill in the one shown on the R2 API token page:
![Endpoint shown on the R2 API token page](https://linux.do/uploads/default/original/3X/2/2/22a84d1e886982f1dd8a6ed8a2463021d298b3ab.webp)
The next step is the permission (ACL) setting; pressing Enter is fine.
The step after that is advanced config; just press Enter again, then confirm the information to save, and type q to quit. Below is a complete configuration session:
```
root@ecspNjn:~/bash# rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> wordpress_backup
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
 6 / Box
   \ "box"
 7 / Cache a remote
   \ "cache"
 8 / Citrix Sharefile
   \ "sharefile"
 9 / Dropbox
   \ "dropbox"
10 / Encrypt/Decrypt a remote
   \ "crypt"
11 / FTP Connection
   \ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
13 / Google Drive
   \ "drive"
14 / Google Photos
   \ "google photos"
15 / Hubic
   \ "hubic"
16 / In memory object storage system.
   \ "memory"
17 / Jottacloud
   \ "jottacloud"
18 / Koofr
   \ "koofr"
19 / Local Disk
   \ "local"
20 / Mail.ru Cloud
   \ "mailru"
21 / Microsoft Azure Blob Storage
   \ "azureblob"
22 / Microsoft OneDrive
   \ "onedrive"
23 / OpenDrive
   \ "opendrive"
24 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
25 / Pcloud
   \ "pcloud"
26 / Put.io
   \ "putio"
27 / SSH/SFTP Connection
   \ "sftp"
28 / Sugarsync
   \ "sugarsync"
29 / Transparently chunk/split large files
   \ "chunker"
30 / Union merges the contents of several upstream fs
   \ "union"
31 / Webdav
   \ "webdav"
32 / Yandex Disk
   \ "yandex"
33 / http Connection
   \ "http"
34 / premiumize.me
   \ "premiumizeme"
35 / seafile
   \ "seafile"
Storage> 4
** See help for s3 backend at: https://rclone.org/s3/ **
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
 4 / Digital Ocean Spaces
   \ "DigitalOcean"
 5 / Dreamhost DreamObjects
   \ "Dreamhost"
 6 / IBM COS S3
   \ "IBMCOS"
 7 / Minio Object Storage
   \ "Minio"
 8 / Netease Object Storage (NOS)
   \ "Netease"
 9 / Scaleway Object Storage
   \ "Scaleway"
10 / StackPath Object Storage
   \ "StackPath"
11 / Tencent Cloud Object Storage (COS)
   \ "TencentCOS"
12 / Wasabi Object Storage
   \ "Wasabi"
13 / Any other S3 compatible provider
   \ "Other"
provider> cloudflare
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth>
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> access_key_id
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> secret_access_key
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Use this if unsure. Will use v4 signatures and an empty region.
   \ ""
 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   \ "other-v2-signature"
region> auto
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> https://your_account_id.r2.cloudflarestorage.com
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> private
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[wordpress_backup]
provider = cloudflare
access_key_id = access_key_id
secret_access_key = secret_access_key
region = auto
endpoint = https://your_account_id.r2.cloudflarestorage.com
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
wordpress_backup     s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
```
Writing the scheduled backup script
Upload script
This script compresses the source directory, uploads the archive to the remote, and deletes the oldest remote backups once the maximum count is exceeded.
You need to change a few things in the script below: SOURCE_DIR is the folder you want to back up, and RCLONE_REMOTE is your R2 destination, in the form {remote_name}:bucket_name/directory.
```bash
#!/bin/bash

# Directory to back up
SOURCE_DIR="/data/compose/2/"

# Directory where this script lives
SCRIPT_DIR=$(dirname "$(readlink -f "$0")")

# Temporary location for the archive
TEMP_DIR="$SCRIPT_DIR/temp"

# rclone remote name and target path
RCLONE_REMOTE="wordpress_backup:wordpress-backup/blog"

# Log file path
LOG_FILE="$SCRIPT_DIR/backup.log"

# Maximum number of backups to keep
MAX_BACKUPS=2

# Timestamp used in the archive name
DATE=$(date +"%Y-%m-%d_%H-%M-%S")

# Create the temp directory if it does not exist
mkdir -p "$TEMP_DIR"

# Compress the source directory and log progress
ARCHIVE_NAME="${TEMP_DIR}/$(basename "$SOURCE_DIR")-${DATE}.tar.gz"
echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Compression started" >> "$LOG_FILE"
tar -czf "$ARCHIVE_NAME" "$SOURCE_DIR" >> /dev/null 2>&1
echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Compression completed" >> "$LOG_FILE"

# Delete the oldest remote backups once MAX_BACKUPS is exceeded
EXISTING_BACKUPS=$(rclone lsf "$RCLONE_REMOTE" | grep "$(basename "$SOURCE_DIR")-.*\.tar\.gz" | sort)
NUM_BACKUPS=$(echo "$EXISTING_BACKUPS" | wc -l)
if [ "$NUM_BACKUPS" -gt "$MAX_BACKUPS" ]; then
    NUM_TO_DELETE=$((NUM_BACKUPS - MAX_BACKUPS))
    BACKUPS_TO_DELETE=$(echo "$EXISTING_BACKUPS" | head -n "$NUM_TO_DELETE")
    for BACKUP in $BACKUPS_TO_DELETE; do
        echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Deleting old backup: $BACKUP" >> "$LOG_FILE"
        rclone delete "$RCLONE_REMOTE/$BACKUP" >> "$LOG_FILE" 2>&1
    done
fi

# Upload the new backup
echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Starting upload of $ARCHIVE_NAME to $RCLONE_REMOTE" >> "$LOG_FILE"
echo "Running: rclone copy \"$ARCHIVE_NAME\" \"$RCLONE_REMOTE/\" --log-file=\"$LOG_FILE\" --log-level INFO --s3-no-check-bucket" >> "$LOG_FILE"
rclone copy "$ARCHIVE_NAME" "$RCLONE_REMOTE/" --s3-no-check-bucket --log-file="$LOG_FILE" --log-level INFO
UPLOAD_STATUS=$?

# Check whether the upload succeeded
if [ $UPLOAD_STATUS -eq 0 ]; then
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Upload of $ARCHIVE_NAME completed successfully" >> "$LOG_FILE"
    rm "$ARCHIVE_NAME"  # Remove the local archive after a successful upload
else
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Failed to upload $ARCHIVE_NAME" >> "$LOG_FILE"
fi

# Record the final status of the backup run
if [ $UPLOAD_STATUS -eq 0 ]; then
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Backup process completed successfully" >> "$LOG_FILE"
else
    echo "[$(date +"%Y-%m-%d_%H-%M-%S")] Backup process failed" >> "$LOG_FILE"
fi
```
I put it at /root/bash/backup.sh.
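Before handing the script to cron, make it executable and do one manual run to confirm the log looks right (paths assume the /root/bash/backup.sh location above):

```bash
chmod +x /root/bash/backup.sh     # cron will execute the script directly
/root/bash/backup.sh              # one manual test run
tail -n 20 /root/bash/backup.log  # the log should end with "completed successfully"
```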
Cron setup
Open the crontab:
crontab -e
An editor selection prompt will appear; pick whichever you like.
![Editor selection prompt](https://linux.do/uploads/default/original/3X/e/c/ec891a161129fc79e5b48aacc40542fc31f2ef87.webp)
Add the following line, which runs the script every day at 02:00:
0 2 * * * /root/bash/backup.sh
Save and exit, and you're done.
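To double-check that the job was registered, list the crontab; whether cron executions land in syslog depends on the distro, so the log path below is an assumption (Debian/Ubuntu):

```bash
crontab -l                              # the 0 2 * * * line should be listed
grep CRON /var/log/syslog | tail -n 5   # recent cron activity (Debian/Ubuntu)
```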