Running the Apache ShardingSphere Proxy
Learn how to run Apache ShardingSphere Proxy on the platform using the SPA widget.
In this lesson, we’ll look at how to use the SPA widget to run the application.
Docker Job
Create a Docker Job
With the Docker image built, the next step is to add a Docker job for the image. The Docker job is responsible for creating the actual container, so it is important to get this step right.
To create a job, click the “Add Job” button:
A menu will be displayed where you can set the details of the job:
Select Docker Job Type
You can use this to select your desired job type. You can choose from the following options depending upon your needs:
- Default
- Live - Important for our use case.
- GUI
Note: We have set the Docker job type to Live, as this is the one required for our use case.
Job Name
This is simply the name of the job. This name will be used to select the job in a code widget or SPA (more on this later).
Input File Name
The input file is used in the Code widget to determine which file should be executed.
Since this is irrelevant for our use case, any dummy file name will do here.
Run Script
In a Default type Docker job, this script will be run inside your container to execute the input file. Think of it as a set of commands that are run in the terminal of your container.
On the first run, the Start Script is executed before the Run Script; after that, every subsequent run executes only the commands written in this field (not the Start Script).
The code present in the SPA widget’s editor lives in a directory called /usercode, which is present in your container’s root directory by default.
If you are serving your application from a directory other than /usercode, you can copy the files there once using the Start Script. For subsequent runs, however, you will have to copy the project files from the /usercode directory to that specific directory using the Run Script.
We will serve our application from the apache-shardingsphere-5.1.0-shardingsphere-proxy-bin directory, which means we have to copy the contents of the /usercode directory every time the user changes them.
This is also how we work around the SPA size limitation: we upload only the files the user can play around with to the SPA widget (/usercode), while the rest of the dependencies stay in the apache-shardingsphere-5.1.0-shardingsphere-proxy-bin directory.
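# Run Script: copy the user's edits from /usercode into the proxy's conf directory on every run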
cp -r /usercode/* /apache-shardingsphere-5.1.0-shardingsphere-proxy-bin/conf/
Application Port
Our platform supports different types of applications. One of the most common use cases is serving web applications using the SPA widget. This is where the Application Port option comes in: you can specify the port your server will be listening on. In our case, we can set it to anything (we used 8080), as we won’t be using it.
HTTPS Strict Mode
Similarly, if your web application is served over HTTPS, tick this field.
Force Relaunch On Widget Switch
With this option enabled, if you use the same job on different widgets, a new container will be started each time you switch between them. Within the same widget, however, the session won’t restart. Check this box if you want the container to restart on every widget switch.
We do not need Force Relaunch On Widget Switch for this tutorial, so we can leave it at its default value, i.e., false.
Start Script
This script runs the first time you click the “RUN” button in an SPA. It runs only once, as the first command whenever your container starts. Typically, this script is used to start the server.
The code present in the SPA widget’s editor lives in a directory called /usercode.
So if the user makes any changes to config-sharding.yaml (which is in /usercode), they will automatically be applied to the /apache-shardingsphere-5.1.0-shardingsphere-proxy-bin/conf directory on pressing Run again. We will run our proxy from the /apache-shardingsphere-5.1.0-shardingsphere-proxy-bin directory.
Note: You can run the server from some other project directory as well. But if you want the user’s changes to take effect, you will have to copy the updated files from the /usercode directory to the appropriate project directory.
Our Start Script would look something like this:
export PGCLIENTENCODING='utf-8' && service postgresql start && clear && cd apache-shardingsphere-5.1.0-shardingsphere-proxy-bin && ./bin/start.sh 8080
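For readability, here is the same script spelled out line by line. This is just a sketch of what the one-liner above does; the single line is what goes in the Start Script field:
export PGCLIENTENCODING='utf-8'                          # force UTF-8 client encoding for psql
service postgresql start                                 # start the local PostgreSQL backend
clear                                                    # clear the terminal before launching
cd apache-shardingsphere-5.1.0-shardingsphere-proxy-bin  # move into the proxy's directory
./bin/start.sh 8080                                      # launch ShardingSphere Proxy on port 8080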
Your container stays active for 15 minutes. Once it times out, the next run starts a fresh container, so the Start Script is executed again.
Our final Docker job will look something like this:
Now that our Docker job is complete, we can run our app right away.
Select the Docker Job in the SPA
Navigate to your course’s lesson and create a new Single Page Application (SPA) widget. You can find the option in the lesson by clicking on the + sign.
Once created, select the Docker job that you created from the Docker (beta) dropdown menu:
Try executing the application below:
schemaName: sharding_db

dataSources:
  ds_0:
    url: jdbc:postgresql://127.0.0.1:5432/demo_db
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  ds_1:
    url: jdbc:postgresql://127.0.0.1:5432/demo_db_1
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..4}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..4}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123
  scalingName: default_scaling
  scaling:
    default_scaling:
      input:
        workerThread: 40
        batchSize: 1000
        rateLimiter:
          type: QPS
          props:
            qps: 50
      output:
        workerThread: 40
        batchSize: 1000
        rateLimiter:
          type: TPS
          props:
            tps: 2000
      streamChannel:
        type: MEMORY
        props:
          block-queue-size: 10000
      completionDetector:
        type: IDLE
        props:
          incremental-task-idle-minute-threshold: 30
      dataConsistencyChecker:
        type: DATA_MATCH
        props:
          chunk-size: 1000
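Note that this configuration assumes the demo_db and demo_db_1 databases already exist in the local PostgreSQL instance (in this setup, presumably created as part of the Docker image built earlier). If you ever need to create them manually, a sketch:
psql -h 127.0.0.1 -p 5432 -U postgres -c 'create database demo_db;'
psql -h 127.0.0.1 -p 5432 -U postgres -c 'create database demo_db_1;'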
Execute the following command:
psql -h 127.0.0.1 -p 8080 -U postgres sharding_db
Create the tables using the following SQL:
create table t_order(order_id int8 primary key, user_id int8, info text, c1 int, crt_time timestamp);
create table t_order_item(order_item_id int8 primary key, order_id int8, user_id int8, info text, c1 int, c2 int, c3 int, c4 int, c5 int, crt_time timestamp);
Now, if we were to execute \dt in the terminal, we would see five copies of each of these two tables.
We can go ahead and insert data into the tables, and it will be distributed among the physical tables above. We can insert sample data by executing the following command:
insert into t_order (user_id, info, c1, crt_time) values (0,'a',1,now());
Try executing this multiple times and verify that the rows have been added with the following command:
select * from t_order;
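Under the hood, the inline expressions determine where each row lands: ds_${user_id % 2} picks the database and t_order_${order_id % 2} picks the table. One way to see this is to connect to a backing database directly, bypassing the proxy. The commands below are a sketch; they assume the demo_db and demo_db_1 databases from the configuration above are reachable on the local PostgreSQL port:
# List the physical shards behind the logical tables
psql -h 127.0.0.1 -p 5432 -U postgres demo_db -c '\dt'
# Rows inserted with user_id = 0 route to ds_0 (0 % 2 = 0)
psql -h 127.0.0.1 -p 5432 -U postgres demo_db -c 'select * from t_order_0;'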
ReadWrite-Splitting
Just like Sharding, we can enable ReadWrite-Splitting on our Postgres databases.
schemaName: readwrite_splitting_db

dataSources:
  primary_ds:
    url: jdbc:postgresql://127.0.0.1:5432/primary_db
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  replica_ds_1:
    url: jdbc:postgresql://127.0.0.1:5432/replica_db_1
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
  replica_ds_2:
    url: jdbc:postgresql://127.0.0.1:5432/replica_db_2
    username: postgres
    password: postgres
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

rules:
- !READWRITE_SPLITTING
  dataSources:
    readwrite-ds: # Logic data source name of readwrite-splitting
      type: Dynamic
      props:
        auto-aware-data-source-name: mgr_replication_ds
        write-data-source-name: primary_ds # Write data source name
        read-data-source-names: replica_ds_1,replica_ds_2 # Read data source names, multiple data source names separated with comma
      loadBalancerName: ROUND_ROBIN # Load balance algorithm name
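As before, the primary_db, replica_db_1, and replica_db_2 databases are assumed to already exist in the local PostgreSQL instance. If they need to be created manually, a sketch:
psql -h 127.0.0.1 -p 5432 -U postgres -c 'create database primary_db;'
psql -h 127.0.0.1 -p 5432 -U postgres -c 'create database replica_db_1;'
psql -h 127.0.0.1 -p 5432 -U postgres -c 'create database replica_db_2;'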
Execute the following command:
psql -h 127.0.0.1 -p 8080 -U postgres readwrite_splitting_db
Create the tables using the following SQL:
create table t_order(order_id int8 primary key, user_id int8, info text, c1 int, crt_time timestamp);
create table t_order_item(order_item_id int8 primary key, order_id int8, user_id int8, info text, c1 int, c2 int, c3 int, c4 int, c5 int, crt_time timestamp);
We can go ahead and insert data into the tables. With read-write splitting, writes are routed to the primary data source (primary_ds). We can insert sample data by executing the following command:
insert into t_order (order_id, user_id, info, c1, crt_time) values (0, 0,'a',1,now());
Try executing this multiple times and verify that the rows have been added with the following command:
select * from t_order;
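With the Dynamic rule above, reads alternate between replica_ds_1 and replica_ds_2 according to the ROUND_ROBIN load balancer, while the insert above goes to primary_ds. For example, running the same read twice through the proxy should hit a different replica each time; the routing shows up in the logs (see below), not in the psql output:
psql -h 127.0.0.1 -p 8080 -U postgres readwrite_splitting_db -c 'select * from t_order;'  # routed to one replica
psql -h 127.0.0.1 -p 8080 -U postgres readwrite_splitting_db -c 'select * from t_order;'  # routed to the other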
You can verify which data source each query was routed to by reviewing the logs, using the following command:
cat apache-shardingsphere-5.1.0-shardingsphere-proxy-bin/logs/stdout.log
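To watch routing decisions live while queries execute, something like the following works. This is a sketch: it assumes SQL logging is enabled (the sql-show property in server.yaml), in which case each statement is logged with an “Actual SQL” line naming the target data source.
tail -f apache-shardingsphere-5.1.0-shardingsphere-proxy-bin/logs/stdout.log | grep -i 'Actual SQL'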