Quick-start - Containers
Note
Please replace 4.4.0-1.c1 with the desired version in the following steps.
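The version string recurs in paths, package names, and image tags throughout this guide. As an optional shell convenience (a sketch, not part of the product), you can set it once and substitute it into later commands:

```shell
# Set the desired StorageFabric version once; later commands can then
# reference ${STORAGEFABRIC_VERSION} instead of a hard-coded string.
STORAGEFABRIC_VERSION="4.4.0-1.c1"

# Example substitution: the install directory used later in this guide.
echo "/opt/storagefabric-${STORAGEFABRIC_VERSION}"
```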
Prerequisites
A RedHat 9 or Rocky Linux 9 system with Docker or Podman installed.
To get started with StorageFabric, use the following steps:
Create Buckets and Access Keys with Storage Providers
Create buckets where your StorageFabric configuration data will be stored. With StorageFabric, all data (including configuration) is stored with providers. This makes StorageFabric components stateless, resulting in less management overhead for users and enabling quick disaster recovery.
1 Create a configuration bucket and a data bucket with a storage provider.
Amazon
Follow the Amazon S3 tutorial to create your Configuration Bucket in your desired AWS region. Note down the bucket name. We will use it later to set up StorageFabric.
Google
Follow the Google Cloud Storage tutorial to create your Configuration Bucket. Note down the bucket name. We will use it later to set up StorageFabric.
Azure
Follow the Microsoft Azure Create a storage account tutorial to create your Azure Cloud Storage Account.
Follow this Microsoft Azure tutorial to create an Azure container. This will be your Configuration Bucket. For Public access level, select the default level Private (no anonymous access). Note down the container name. We will use it later to set up StorageFabric.
2 Get your storage provider access credentials.
Create or obtain credentials for your storage providers so that StorageFabric can communicate with them.
Amazon
Follow the Amazon Security Credentials tutorial to create an Access Key ID and Secret Access Key with read-write access to your Configuration Bucket.
Google
Follow the Google Key Management tutorial to create an Access Key ID and Secret Access Key with read-write access to your Configuration Bucket.
Azure
Take the following steps to obtain an Access Key ID and a corresponding Secret Access Key for Azure:
Follow the Microsoft Azure tutorial to obtain keys for your Azure containers.
The storage account name will serve as the Access Key ID.
The key value (from either key1 or key2) will serve as the Secret Access Key.
Next Step: Install and Setup StorageFabric
Install and Setup StorageFabric
1 Register with the Virtalica software and downloads portal
To gain access to StorageFabric documentation and software, register at https://content.virtalica.com.
2 Download and install the StorageFabric software
Choose either to download and install the rpm, or to download and extract the tar file.
RPM
2a Download the StorageFabric rpm
curl -s -S -O -u USERNAME URL
In the above command, replace USERNAME with the user name you registered at https://content.virtalica.com, and replace URL based on your selected distribution:
RedHat 9.x
https://repos.virtalica.com/fabric/enterprise/rhel/9/x86_64/storagefabric-4.4.0-1.c1.x86_64.rpm
Rocky 9.x
https://repos.virtalica.com/fabric/enterprise/rocky/9/x86_64/storagefabric-4.4.0-1.c1.x86_64.rpm
Enter your password when prompted to download the software.
2b Import StorageFabric RPM-GPG key
Download the repository key using the below command based on your distribution:
RedHat
curl "https://repos.virtalica.com/fabric/enterprise/rhel/RPM-GPG-KEY-StorageFabric" \
-u USERNAME > /etc/pki/rpm-gpg/RPM-GPG-KEY-StorageFabric
Rocky
curl "https://repos.virtalica.com/fabric/enterprise/rocky/RPM-GPG-KEY-StorageFabric" \
-u USERNAME > /etc/pki/rpm-gpg/RPM-GPG-KEY-StorageFabric
In the above commands, replace USERNAME with your registered user name.
You will be prompted to enter your password.
Import the repository key.
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-StorageFabric
2c Install the StorageFabric rpm
The command below creates the directory /opt/storagefabric-4.4.0-1.c1/, which contains the files necessary for running StorageFabric containers.
rpm -i storagefabric-4.4.0-1.c1.x86_64.rpm
TAR
2a Download the StorageFabric tar
First, set your working directory to where you’d like the tar file stored. Then, download the tar file using the command below.
curl -O -u USERNAME https://repos.virtalica.com/fabric/enterprise/files/storagefabric-4.4.0-1.c1.tar.gz
In the above command, replace USERNAME with your registered user name.
You will be prompted to enter your password.
The -O flag will result in the downloaded tar file having the same name as on the remote server, which is storagefabric-4.4.0-1.c1.tar.gz.
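To see how -O names the output file, here is a self-contained illustration using a local file:// URL (the file names are throwaway examples, not StorageFabric artifacts):

```shell
# Create a 'remote' file, then fetch it with -O: curl names the local
# copy after the last path segment of the URL.
tmp=$(mktemp -d)
mkdir "$tmp/remote" "$tmp/local"
echo "payload" > "$tmp/remote/archive.tar.gz"
cd "$tmp/local"
curl -s -O "file://$tmp/remote/archive.tar.gz"
ls    # the downloaded file keeps the remote name: archive.tar.gz
```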
2b Create directory
Create a new directory, which will be used in the next step to hold the extracted tar contents.
sudo mkdir -p /opt/storagefabric-4.4.0-1.c1
2c Extract tar
While in the directory with the tar downloaded in the first step, run the below command to extract its contents to the /opt/storagefabric-4.4.0-1.c1 directory.
tar -xvzf storagefabric-4.4.0-1.c1.tar.gz -C /opt/storagefabric-4.4.0-1.c1
The /opt/storagefabric-4.4.0-1.c1 directory now contains everything needed to start running containers.
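The -C option used above tells tar to extract into the given directory rather than the current one. A minimal self-contained demonstration (with throwaway file names, not actual StorageFabric contents):

```shell
# Build a tiny archive, then extract it into a separate directory with -C,
# mirroring the extraction step above.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/dest"
echo "hello" > "$tmp/src/example.txt"
tar -czf "$tmp/archive.tar.gz" -C "$tmp/src" example.txt
tar -xzf "$tmp/archive.tar.gz" -C "$tmp/dest"
cat "$tmp/dest/example.txt"    # prints: hello
rm -rf "$tmp"
```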
Note
After this point, all steps will be the same, regardless of which option (rpm or tar) you selected.
3 Import the StorageFabric container image
Run the below command to make the image available to your container manager.
Docker
docker load < /opt/storagefabric-4.4.0-1.c1/storagefabric-docker-image-4.4.0-1.c1.tar.gz
Podman
podman load < /opt/storagefabric-4.4.0-1.c1/storagefabric-docker-image-4.4.0-1.c1.tar.gz
4 Create volumes
Although you can use bind mounts, we recommend using volumes instead. Volumes are more portable, allow the use of alternative storage drivers (such as NFS), and do not require uid/gid mapping.
Create volumes for StorageFabric containers as shown below.
Note
If your use case requires uid/gid, please set uid=997 and gid=995.
Docker Volumes
Configuration Manager & Gateway
docker volume create storagefabric-logs
docker volume create storagefabric-conf
# Update the size to match your needs
docker volume create storagefabric-cm-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
docker volume create storagefabric-gw-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
Configuration Manager only
docker volume create storagefabric-logs
docker volume create storagefabric-conf
# Update the size to match your needs
docker volume create storagefabric-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
Gateway only
docker volume create storagefabric-logs
docker volume create storagefabric-conf
# Update the size to match your needs
docker volume create storagefabric-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
Podman Volumes
Note
If using Podman on RHEL, be advised that the size option is only supported on
file systems mounted with prjquota enabled.
If an error is encountered, see the xfs_quota(8) man page.
Configuration Manager & Gateway
podman volume create storagefabric-logs
podman volume create storagefabric-conf
# Update the size to match your needs
podman volume create storagefabric-cm-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
podman volume create storagefabric-gw-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
Configuration Manager only
podman volume create storagefabric-logs
podman volume create storagefabric-conf
# Update the size to match your needs
podman volume create storagefabric-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
Gateway only
podman volume create storagefabric-logs
podman volume create storagefabric-conf
# Update the size to match your needs
podman volume create storagefabric-memstore \
--opt type=tmpfs --opt device=tmpfs --opt o=size=256m
5 Set up the StorageFabric configuration
Select which StorageFabric components run within a container by editing
the file /opt/storagefabric-4.4.0-1.c1/storagefabric/storagefabric on the host.
For example, to enable only the StorageFabric Gateway, use:
STORAGEFABRIC_CONFIGURATION_MANAGER_ENABLED=false
STORAGEFABRIC_GATEWAY_ENABLED=true
Then, change your StorageFabric configuration by editing the configuration files on the host. Note that some minimum configuration options must be set to run StorageFabric.
Note
The configuration file supports the Ansible format. Sensitive items can also be stored encrypted using the Ansible vault feature. For details, see the full product documentation.
Moreover, these files are required when starting up a StorageFabric container. Once the container has started, these files can be removed from the host system.
Configuration Manager
Minimum configuration options that need to be set in /opt/storagefabric-4.4.0-1.c1/storagefabric/configuration_manager.yml:
################################# SYNC CONFIGURATION #################################
storagefabric_cm_sync:
  #***
  #* StorageFabric Master Encryption Key.
  #*
  #* To generate a new Master Encryption Key, use one of the following:
  #*
  #* * storagefabric-keygen --master-encryption-key
  #*
  #* * echo $(openssl rand -hex 32)$(printf %X $(date +%s)) | tr '[:lower:]' '[:upper:]'
  #*
  #* * Generate using your enterprise KMS.
  master_encryption_key: ""
  configuration:
    #***
    #* Base url for the provider's s3 endpoint. E.g., s3.amazonaws.com
    provider_url: ""
    #***
    #* API type can be one of s3, s3_v4, azure.
    #* provider_region must be specified if API type is s3_v4.
    provider_api_type: ""
    #***
    #* Bucket where StorageFabric configuration data is stored.
    bucket: ""
    #***
    #* Access Key ID. Used by StorageFabric to access the configuration bucket.
    access_key_id: ""
    #***
    #* Secret Access Key corresponding to the Access Key ID above.
    secret_access_key: ""
storagefabric_cm_configuration:
  views:
    #***
    #* View name
    - name: default
      #***
      #* Use one of:
      #*
      #* - provide a known View encryption key
      #*
      #* - generate a new one with one of:
      #*
      #*   * storagefabric-keygen --view-encryption-key
      #*
      #*   * echo $(openssl rand -hex 32)$(printf %X $(date +%s)) | tr '[:lower:]' '[:upper:]'
      encryption_key: ""
  users:
    #***
    #* Define users for the StorageFabric WEB UI.
    - name: admin
      password: ""
#################################### LICENSE ################################
#***
#* Entire license in PEM format. The license will be saved in the file
#* /etc/storagefabric/licenses/license_ansible.pem.
#* The license contents pasted below should be formatted such that
#* it starts with a space, then a pipe (vertical line), with your
#* license contents pasted directly below that. Each line of the license
#* should be prepended by two spaces. Example:
#* storagefabric_license: |
#*   -----BEGIN CERTIFICATE-----
#*   MIOPkTCCA3mgAgIBAgIUKvJ+4taG07SCKEYdOJjJRj0/khkwDQYJKoZIhvcNAQEL
#*   BQAwVzELMAkGs1UEBhMCVVMxCzAJBgNVBAgMAk5ZMRcwFQMNAQQKDA5WaXJ0YWxp
#*   ...
storagefabric_license: |
############################# OTHER WEB SETTINGS ###############################
# To generate the following keys, use the command: openssl rand -hex 50
storagefabric_cm_django_web_secret_key: ""
storagefabric_cm_django_system_web_secret_key: ""
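The generator commands quoted in the comments above can be exercised directly in a shell. This sketch produces a Master Encryption Key (64 hex characters of randomness plus a hex timestamp) and a web secret key, then prints their lengths:

```shell
# Master Encryption Key: 32 random bytes as hex plus the current Unix time
# in hex, upper-cased (the openssl variant from the comment above).
MASTER_KEY=$(echo $(openssl rand -hex 32)$(printf %X $(date +%s)) | tr '[:lower:]' '[:upper:]')

# Web secret keys: 50 random bytes as hex (openssl rand -hex 50).
WEB_SECRET=$(openssl rand -hex 50)

echo "master key length: ${#MASTER_KEY}"   # 72 = 64 random + 8 timestamp hex digits
echo "web secret length: ${#WEB_SECRET}"   # 100
```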
Gateway
Minimum configuration options that need to be set in /opt/storagefabric-4.4.0-1.c1/storagefabric/gateway.yml:
################################# SYNC CONFIGURATION #################################
storagefabric_gw_sync:
  view:
    #***
    #* View name
    name: default
    #***
    #* Use one of:
    #*
    #* - provide a known View encryption key
    #*
    #* - generate a new one with one of:
    #*
    #*   * storagefabric-keygen --view-encryption-key
    #*
    #*   * echo $(openssl rand -hex 32)$(printf %X $(date +%s)) | tr '[:lower:]' '[:upper:]'
    encryption_key: ""
  configuration:
    #***
    #* Base url for the provider's s3 endpoint. E.g., s3.amazonaws.com
    provider_url: ""
    #***
    #* API type can be one of s3, s3_v4, azure.
    #* provider_region must be specified if API type is s3_v4.
    provider_api_type: ""
    #***
    #* Bucket where StorageFabric configuration data is stored.
    bucket: ""
    #***
    #* Access Key ID. Used by StorageFabric to access the configuration bucket.
    access_key_id: ""
    #***
    #* Secret Access Key corresponding to the Access Key ID above.
    secret_access_key: ""
Note
To reconfigure a running container (without restart), see the full product documentation.
Note
For detailed help and a complete list of all configuration options, refer to the StorageFabric Ansible roles documentation.
6 Run the StorageFabric Container
Run StorageFabric using Docker/Podman commands or as a systemd service.
Systemd
Set up StorageFabric as a systemd service as described in the full product documentation.
Then, start the service with:
systemctl start storagefabric
Alternatively, use the underlying Docker/Podman commands directly.
Docker Run
Configuration Manager & Gateway
docker run -d --network host \
--name "storagefabric" \
-v storagefabric-logs:/var/log/storagefabric:Z \
-v storagefabric-conf:/etc/storagefabric:Z \
-v storagefabric-cm-memstore:/etc/storagefabric/configuration-manager/memstore:Z \
-v storagefabric-gw-memstore:/etc/storagefabric/gateway/memstore:Z \
-v /opt/storagefabric-4.4.0-1.c1/storagefabric:/storagefabric:ro \
--cap-add CAP_NET_ADMIN \
--ulimit nofile=17000:17000 \
virtalica/storagefabric:4.4.0-1.c1
Configuration Manager only
docker run -d --network host \
--name "storagefabric" \
-v storagefabric-logs:/var/log/storagefabric:Z \
-v storagefabric-conf:/etc/storagefabric:Z \
-v storagefabric-memstore:/etc/storagefabric/configuration-manager/memstore:Z \
-v /opt/storagefabric-4.4.0-1.c1/storagefabric:/storagefabric:ro \
--ulimit nofile=17000:17000 \
virtalica/storagefabric:4.4.0-1.c1
Gateway only
docker run -d --network host \
--name "storagefabric" \
-v storagefabric-logs:/var/log/storagefabric:Z \
-v storagefabric-conf:/etc/storagefabric:Z \
-v storagefabric-memstore:/etc/storagefabric/gateway/memstore:Z \
-v /opt/storagefabric-4.4.0-1.c1/storagefabric:/storagefabric:ro \
--cap-add CAP_NET_ADMIN \
--ulimit nofile=17000:17000 \
virtalica/storagefabric:4.4.0-1.c1
Podman Run
Configuration Manager & Gateway
podman run -d --network host \
--name "storagefabric" \
-v storagefabric-logs:/var/log/storagefabric:Z \
-v storagefabric-conf:/etc/storagefabric:Z \
-v storagefabric-cm-memstore:/etc/storagefabric/configuration-manager/memstore:Z \
-v storagefabric-gw-memstore:/etc/storagefabric/gateway/memstore:Z \
-v /opt/storagefabric-4.4.0-1.c1/storagefabric:/storagefabric:ro \
--cap-add CAP_NET_ADMIN \
--cap-add=CAP_AUDIT_WRITE \
--ulimit nofile=17000:17000 \
localhost/virtalica/storagefabric:4.4.0-1.c1
Configuration Manager only
podman run -d --network host \
--name "storagefabric" \
-v storagefabric-logs:/var/log/storagefabric:Z \
-v storagefabric-conf:/etc/storagefabric:Z \
-v storagefabric-memstore:/etc/storagefabric/configuration-manager/memstore:Z \
-v /opt/storagefabric-4.4.0-1.c1/storagefabric:/storagefabric:ro \
--cap-add=CAP_AUDIT_WRITE \
--ulimit nofile=17000:17000 \
localhost/virtalica/storagefabric:4.4.0-1.c1
Gateway only
podman run -d --network host \
--name "storagefabric" \
-v storagefabric-logs:/var/log/storagefabric:Z \
-v storagefabric-conf:/etc/storagefabric:Z \
-v storagefabric-memstore:/etc/storagefabric/gateway/memstore:Z \
-v /opt/storagefabric-4.4.0-1.c1/storagefabric:/storagefabric:ro \
--cap-add CAP_NET_ADMIN \
--cap-add=CAP_AUDIT_WRITE \
--ulimit nofile=17000:17000 \
localhost/virtalica/storagefabric:4.4.0-1.c1
Note
For optimal network performance, running containers in host network mode is strongly recommended. For more details, refer to the Docker or Podman documentation.
Note
The capability CAP_NET_ADMIN is required for certain StorageFabric gateway features; see the full product documentation. Certain gateway services in the container will fail to start without this capability.
The capability CAP_AUDIT_WRITE is required on RedHat for Audit system logging to function properly.
The option --ulimit nofile=17000:17000 specifies hard and soft limits on open files that are consistent with StorageFabric's default settings.
Note
Kernel-level settings for containers must be configured on the host. For more information, see Configuring kernel parameters at runtime.
fs.inotify.max_user_instances
Edit /etc/sysctl.conf and add the following line: fs.inotify.max_user_instances=256. Then execute the following command to load the new settings:
sudo sysctl -p
fs.inotify.max_user_instances should be at least twice the number of logical cores for Gateways. See the full product documentation.
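To check your host against the sizing guidance above, you can read the live value from /proc and compare it with twice the logical core count (the 2x rule comes from this section; your deployment's recommended value may differ):

```shell
# Current kernel limit for inotify instances per user.
current=$(cat /proc/sys/fs/inotify/max_user_instances)

# Guidance above: at least twice the number of logical cores for Gateways.
recommended=$((2 * $(nproc)))

echo "current=${current} recommended_minimum=${recommended}"
```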
Note
SELinux settings on the host need updates to let certain StorageFabric features work properly in the container.
Bandwidth and API Rate Limits
To let the Bandwidth and API Rate Limits feature work in the container when using Podman with SELinux enabled on the host, run the following command on the host:
setsebool -P domain_kernel_load_modules 1
Note
To run the container in the foreground, remove the -d option.
Wait a few moments for StorageFabric to start up before connecting to the StorageFabric Configuration Manager or StorageFabric Gateway.
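One way to script the wait is a small bash helper that polls a TCP port until it accepts connections (a sketch; the host, port, and timeout are placeholders — the Gateway's port 8080 in the commented example comes from the s3cmd commands later in this guide):

```shell
# Poll host:port until it accepts a TCP connection, or give up after
# `tries` one-second attempts. Uses bash's /dev/tcp, so run under bash.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for i in $(seq "$tries"); do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0   # port is accepting connections
    fi
    sleep 1
  done
  return 1       # timed out
}

# Example: wait up to 30 s for the Gateway before running s3cmd.
# wait_for_port localhost 8080 30 && echo "gateway is up"
```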
Logs can be retrieved using the below commands.
Docker
# stdout and stderr
docker logs [-f] storagefabric
# Configuration logs
docker exec -it storagefabric bash -c 'cat /var/log/storagefabric/ansible/ansible*log'
The -f flag can be used to follow the container output.
Podman
# stdout and stderr
podman logs [-f] storagefabric
# Configuration logs
podman exec -it storagefabric bash -c 'cat /var/log/storagefabric/ansible/ansible*log'
The -f flag can be used to follow the container output.
Once the StorageFabric container is running, use the endpoints described in the following table:
| Endpoint | Description |
|---|---|
| | The StorageFabric Configuration Manager web interface and API endpoint. To log in to the web interface, use the user admin and the password that you set in configuration_manager.yml. |
| | The StorageFabric Gateway endpoint. Use this endpoint for unified S3 data operations across all backend providers. |
Configure StorageFabric (Create Virtual Buckets)
StorageFabric can be configured via the Command Line Interface (CLI), the StorageFabric Web Interface, or the API; for details, see the full product documentation. In this tutorial, we will use the web interface.
Log in to the web interface using the username admin and the password that was set in configuration_manager.yml. Once logged in, continue to configure StorageFabric as follows.
1. Add access keys for your Virtual Bucket to StorageFabric configuration.
Access keys for Virtual Buckets are also referred to as Provider Access Keys. Provider Access Keys are used by the StorageFabric Gateway to access the backend bucket.
Amazon
From the left-navigation bar, expand BACKENDS, OBJECT STORAGE and click Provider Access Keys.
Click the button Add Provider Access Key.
In the Access Key ID field, enter your <AWS_ACCESS_KEY_ID>.
In the Secret Access Key field, enter your <AWS_SECRET_ACCESS_KEY>.
If you are using temporary Provider Access Keys, also enter your Session Token in the Session Token field.
Leave the field Lifetime in minutes blank.
Check the option Admin Key. Checking this option means that StorageFabric can also use these credentials to create buckets with Amazon.
Click the button Add.
Google
From the left-navigation bar, expand BACKENDS, OBJECT STORAGE and click Provider Access Keys.
Click the button Add Provider Access Key.
In the Access Key ID field, enter your <GOOGLE_ACCESS_KEY_ID>.
In the Secret Access Key field, enter your <GOOGLE_SECRET_ACCESS_KEY>.
Leave the field Session Token blank.
Leave the field Lifetime in minutes blank.
Check the option Admin Key. Checking this option means that StorageFabric can also use these credentials to create buckets with Google.
Click the button Add.
Azure
From the left-navigation bar, expand BACKENDS, OBJECT STORAGE and click Provider Access Keys.
Click the button Add Provider Access Key.
In the Access Key ID field, enter your <AZURE_STORAGE_ACCOUNT>.
In the Secret Access Key field, enter your <AZURE_ACCESS_KEY>.
Leave the field Session Token blank.
Leave the field Lifetime in minutes blank.
Check the option Admin Key. Checking this option means that StorageFabric can also use these credentials to create containers with Azure.
Click the button Add.
2. Create a Virtual Bucket.
Amazon
From the left-navigation bar, expand STORAGE and click Virtual Buckets.
Click the button Create Virtual Bucket to open the Create Bucket form.
In the Virtual Bucket Name field, enter virtual-data-bucket. This is the Virtual Bucket Name.
In the Provider Name field, select amazon.
In the Backend Bucket Name field, enter the name of your AWS Backend Bucket. Note that this should be different from your Configuration Bucket. To automatically create the bucket with AWS, check the option Create Backend Bucket. If the bucket already exists at AWS, uncheck the option Create Backend Bucket.
Select the Provider Credentials tab.
In the Provider Access Key ID field, select your <AWS_ACCESS_KEY_ID>.
Click the button Create.
Google
From the left-navigation bar, expand STORAGE and click Virtual Buckets.
Click the button Create Virtual Bucket to open the Create Bucket form.
In the Virtual Bucket Name field, enter virtual-data-bucket. This is the Virtual Bucket Name.
In the Provider Name field, select google.
In the Backend Bucket Name field, enter the name of your Google Backend Bucket. Note that this should be different from your Configuration Bucket. To automatically create the bucket with Google, check the option Create Backend Bucket. If the bucket already exists at Google, uncheck the option Create Backend Bucket.
Select the Provider Credentials tab.
In the Provider Access Key ID field, select your <GOOGLE_ACCESS_KEY_ID>.
Click the button Create.
Azure
From the left-navigation bar, expand BACKENDS and click Providers.
Click the button Add Provider to open the Add Provider form.
In the Provider Name field, enter your <AZURE_STORAGE_ACCOUNT>.
In the Provider Base URL field, enter your <AZURE_STORAGE_ACCOUNT>.blob.core.windows.net.
In the Country field, enter US.
Leave the remaining fields in the Connections tab as default.
Select the API Settings tab.
In the API Type field, select AZURE.
Uncheck the Tail Range Supported checkbox.
In the Multipart Mode field, select Disabled.
Click the button Add.
From the left-navigation bar, expand STORAGE and click Virtual Buckets.
Click the button Create Virtual Bucket to open the Create Bucket form.
In the Virtual Bucket Name field, enter virtual-data-bucket. This is the Virtual Bucket Name.
In the Provider Name field, select your <AZURE_STORAGE_ACCOUNT>.
In the Backend Bucket Name field, enter your Azure container name. Note that this should be different from your Configuration Bucket. To automatically create the container with Azure, check the option Create Backend Bucket. If the container already exists at Azure, uncheck the option Create Backend Bucket.
Select the Provider Credentials tab.
In the Provider Access Key ID field, select your <AZURE_STORAGE_ACCOUNT>.
Click the button Create.
To learn more about bucket modes, refer to the full product documentation.
3. Create virtual access keys for Client.
Access keys used by clients to authenticate to the StorageFabric Gateway are referred to as Client Access Keys.
From the left-navigation bar, expand IDENTITY and click Client Access Keys.
Click the button Create Client Access Key to open the Create Client Access key form.
Click the button Create.
A new Access Key ID and Secret Access Key will be displayed. Copy and save them; they will be used to upload and download data to and from your Virtual Buckets.
Upload data using StorageFabric
In this guide, we will use s3cmd as the client-side tool. Many other client-side tools can be used with StorageFabric. For more details, refer to the full product documentation.
First, create a sample file using the command:
touch mydata.txt
Then, upload the sample file to your virtual-data-bucket using the following command with the Client Access Keys generated earlier.
s3cmd put mydata.txt s3://virtual-data-bucket/ \
--host localhost:8080 \
--host-bucket localhost:8080 \
--access_key <CLIENT_ACCESS_KEY_ID> \
--secret_key <CLIENT_SECRET_ACCESS_KEY> \
--signature-v2 --no-ssl
For remote clients, first add your StorageFabric Gateway Domain to clients’ /etc/hosts file or set up DNS for your StorageFabric Gateway Domain. Then, use your StorageFabric Gateway Domain to access cloud storage services via the StorageFabric Gateway.
s3cmd put mydata.txt s3://virtual-data-bucket/ \
--host <GATEWAY_DOMAIN>:8080 \
--host-bucket <GATEWAY_DOMAIN>:8080 \
--access_key <CLIENT_ACCESS_KEY_ID> \
--secret_key <CLIENT_SECRET_ACCESS_KEY> \
--signature-v2 --no-ssl
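Rather than repeating the connection options on every invocation, the same settings can live in s3cmd's configuration file (typically ~/.s3cfg). The option names below are standard s3cmd configuration keys; the bracketed values are the placeholders from the commands above:

```ini
# ~/.s3cfg -- equivalent to the command-line flags used above
[default]
access_key = <CLIENT_ACCESS_KEY_ID>
secret_key = <CLIENT_SECRET_ACCESS_KEY>
host_base = <GATEWAY_DOMAIN>:8080
host_bucket = <GATEWAY_DOMAIN>:8080
use_https = False
signature_v2 = True
```

With this file in place, the upload reduces to s3cmd put mydata.txt s3://virtual-data-bucket/.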
Note
To use SSL connections between clients and the StorageFabric Gateway, refer to the full product documentation.