Data migration from on-premises to the cloud, securely and in compliance with national regulations

Migrating data from on-premises solutions to the cloud is a crucial step in the digital evolution of every modern organization. This move enables faster access to information, greater operational flexibility and reduced infrastructure costs. However, security and regulatory compliance remain key concerns when evaluating a full or partial transition to the cloud. In this context, it is essential to understand how to store data securely and in full compliance with the regulations in force (the AgID guidelines, ISO 27001 and the GDPR).

The cloud is, in fact, often perceived as the right answer to the needs of security, confidentiality, integrity and availability of company data. In reality, cloud solutions are not all alike, and in many cases it makes sense to favor a hybrid approach that, on the one hand, relies on services outside the corporate infrastructure while, on the other, continuing to rely on resources available locally, on-premises.

Consider, for example, the 3-2-1 backup strategy: it requires that at least one copy of the data be kept outside the perimeter of your organization. In this sense, the cloud is key to protecting yourself from accidents and outright disasters that may affect data stored locally.
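The 3-2-1 rule (three copies of the data, on two different media, with one copy off-site) can be sketched as follows; the paths and bucket name are purely illustrative:

```shell
# 3-2-1 sketch: three copies, two local media, one off-site (illustrative paths)
mkdir -p primary second-disk

echo "payroll data" > primary/data.txt       # copy 1: production data
cp primary/data.txt second-disk/data.txt     # copy 2: a second local medium

# Copy 3, off-site, e.g. to an S3-compatible cloud bucket (requires credentials):
# aws s3 cp primary/data.txt s3://offsite-backups/data.txt
```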

Cubbit: S3-compatible and geo-distributed platform

Cubbit is an S3-compatible, geo-distributed platform born in Europe. This means it can work with the Amazon S3 (Simple Storage Service) interface and distribute user-owned data across geographically dispersed nodes, a feature that sets it apart from the traditional scheme used by the vast majority of cloud services.

Rather than concentrating your data in a single data center, Cubbit uses a multilevel peer-to-peer approach that stores each user's information on multiple physically separate nodes, all located, for example, in Europe:

  • Encryption. The data is encrypted with AES-256, a military-grade cryptographic algorithm.
  • Division. The encrypted data is fragmented into N pieces, each indistinguishable from the others.
  • Redundancy. The N pieces are expanded into K fragments using Reed-Solomon error-correction codes.
  • Geo-distribution. User data is geo-distributed, avoiding the problems caused by storing information in a single point.
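As a rough illustration of the first two steps, encryption and fragmentation (not of Cubbit's actual implementation), the same effect can be reproduced with standard command-line tools; the Reed-Solomon redundancy step is omitted here, and the passphrase is a placeholder:

```shell
# Illustrative only: encrypt a file with AES-256, then split it into 4 fragments
echo "confidential payload" > document.txt

# AES-256 encryption (here via openssl, with a passphrase-derived key)
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-secret \
    -in document.txt -out document.enc

# Fragment the ciphertext into 4 pieces, individually meaningless
# (Cubbit additionally adds Reed-Solomon redundancy, not shown here)
split -n 4 -d document.enc fragment.

# Recombining the fragments and decrypting restores the original
cat fragment.0* > recombined.enc
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-secret \
    -in recombined.enc -out restored.txt
```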

Cubbit S3 integrations for data migration to the cloud

Thanks to the many integrations offered by Cubbit, the cloud platform can communicate with a vast array of solutions that are in turn S3-compatible, including Amazon's own AWS. This means data can be copied and moved to and from third-party services with a highly interoperable solution that aims at maximum integration and eliminates the costs of moving the data itself.
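In practice, this interoperability means the same S3 tooling can address either side of a migration simply by switching endpoints; the bucket names below are placeholders, and the transfer commands are commented because they require real credentials and endpoints:

```shell
# Placeholder bucket names: replace with your real values
SRC_BUCKET="s3://legacy-bucket"
DST_BUCKET="s3://cubbit-bucket"

# Pull data from a third-party S3-compatible service...
# aws s3 sync "$SRC_BUCKET" ./staging --endpoint-url https://s3.amazonaws.com

# ...then push it to Cubbit by pointing the same client at Cubbit's endpoint
# aws s3 sync ./staging "$DST_BUCKET" --endpoint-url https://<cubbit-endpoint>
```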

How Cubbit's architecture is structured

Cubbit calls the set of nodes that host user data the swarm (there are currently over 2,000 in Europe): they constitute the backbone of the cloud storage platform and form a true peer-to-peer network.

The S3 calls coming from the client devices and integrations mentioned previously are automatically handled by an S3 Gateway, which acts as an interface to the swarm through the peer-to-peer protocol used by Cubbit.

Orchestrating, supervising and monitoring the network is the Coordinator, an entity made up of a large number of microservices that acts as the controller of everything that happens within the platform. It is the central hub responsible for network optimization, improved fault tolerance and efficient file recovery.

Cubbit architecture

Advantages of the mixed strategy: local plus cloud

The choice between a completely local storage solution and a mixed strategy that combines cloud and local storage can significantly impact security and data management for any professional or business.

While the cloud certainly offers greater accessibility, maintaining a local copy of your data provides an additional layer of security. In the event of cloud service outages or catastrophic events, access to on-premises data can be vital to ensuring business continuity.

With a mixed solution, you can optimize performance based on your specific data management needs. In another article we looked at the difference between cold data and hot data: the former, often quite voluminous but rarely used, should be stored using tools (online and offline) that guarantee a reduced cost per gigabyte. It is often information that the company is required to keep for a certain number of years for legal or tax compliance purposes.

The problem of regulatory compliance

Among the solutions available for a mixed strategy, Cubbit stands out for its focus on security and digital sovereignty, combining the scalability and flexibility of the cloud with regulatory compliance.

The European legislator is favoring the concept of data sovereignty, supporting the use of solutions that allow information to be kept within the borders of the Member States of the Union.

Furthermore, as mentioned previously, the creation of an immutable backup is often required, i.e. an archive whose contents cannot be deleted or modified by anything or anyone. In this sense, the Multi-Site Object Lock and Multi-Site Versioning features offered by Cubbit allow you to protect company data using a geo-distributed approach.

In the first case, Object Lock means you can create immutable data stores in the cloud: Cubbit thus ensures availability, data integrity and compliance with current data-retention regulations. Versioning, instead, refers to the platform's ability to store multiple versions of the same files, keeping track of the changes applied over time. This makes it quite simple to undo the effects of a mistake, such as an unwanted change to a file or the deletion of a folder, and retrace your steps.
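With any S3-compatible client, versioning and object locking are typically enabled through standard s3api calls; the bucket name below is a placeholder, and the commands are commented because they require valid credentials and an endpoint:

```shell
BUCKET="company-backups"   # placeholder bucket name

# Keep multiple versions of each object:
# aws s3api put-bucket-versioning --bucket "$BUCKET" \
#     --versioning-configuration Status=Enabled

# Make objects immutable for 365 days (compliance-style retention):
# aws s3api put-object-lock-configuration --bucket "$BUCKET" \
#     --object-lock-configuration \
#     'ObjectLockEnabled=Enabled,Rule={DefaultRetention={Mode=COMPLIANCE,Days=365}}'
```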

How to migrate data using S3

Cubbit provides a single interface to manage the files stored on the platform. It is accessible via the Web, with any browser, and allows you to create buckets, i.e. the containers that host the data, activate object locking and versioning, upload and download files, generate temporary URLs, and manage access control lists (ACLs), the digital identities of the various users and their policies.

On-premises data migration to the cloud

By clicking on API keys in the left column, you can generate unique access keys (Generate new client API key) that can be used to access resources stored on the Cubbit platform from various clients and integrations.

Cubbit S3 API key

Configure the AWS CLI to access Cubbit cloud storage

To demonstrate that Cubbit is a fully S3-compatible product, this video published on YouTube explains how to start the data migration from on-premises to the cloud using the AWS CLI.

The AWS CLI (Command Line Interface) is open source software that, among other things, lets you interact with S3 storage services from the terminal window. Amazon offers both high-level commands and the actual S3 APIs for communicating with S3-compatible cloud services. Whichever you choose, Cubbit supports both ways of interacting with its cloud storage services.
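The two styles look like this; report.pdf and my-bucket are illustrative names, and the upload commands are commented because they need real credentials and an endpoint:

```shell
echo "sample" > report.pdf   # sample file for illustration only

# High-level command:
# aws s3 cp report.pdf s3://my-bucket/report.pdf

# Equivalent low-level S3 API call:
# aws s3api put-object --bucket my-bucket --key report.pdf --body report.pdf
```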

The Access key ID and the secret key (Secret access key) returned by the Cubbit interface at creation time can optionally be set as environment variables for the AWS CLI. The AWS CLI client is compatible with Linux, macOS and Windows (it can be downloaded from this page): on Linux and macOS, the export AWS_ACCESS_KEY_ID= and export AWS_SECRET_ACCESS_KEY= commands define the Access key ID and Secret access key to use for the remote connection.
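On Linux and macOS this amounts to two export commands; the key values shown are placeholders for the ones generated in the Cubbit console:

```shell
# Placeholder credentials: substitute the values from the Cubbit console
export AWS_ACCESS_KEY_ID="YOUR-CUBBIT-ACCESS-KEY-ID"
export AWS_SECRET_ACCESS_KEY="YOUR-CUBBIT-SECRET-ACCESS-KEY"
```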

Cubbit: AWS CLI environment variables

The same thing can be done on Windows, from the command prompt or from a PowerShell window, using the setx command and the $Env: syntax respectively, pairing them with the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

Finally, setting the AWS_ENDPOINT_URL environment variable tells the AWS CLI that you want to access the Cubbit cloud storage platform. In this regard, make sure you have installed the most up-to-date version of the AWS CLI, because older versions do not support the AWS_ENDPOINT_URL variable.
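Pointing the AWS CLI at Cubbit then takes a single additional variable; the endpoint URL here is a placeholder to be replaced with the one shown in your Cubbit console:

```shell
# Placeholder endpoint: replace with the URL provided by Cubbit
export AWS_ENDPOINT_URL="https://<your-cubbit-endpoint>"

# With the endpoint set, plain S3 commands target Cubbit, e.g.:
# aws s3 ls
```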

