First, create a table for our experiments:
create table example (
id bigserial,
data character varying(64) not null,
created_at timestamp without time zone DEFAULT now(),
primary key (id)
);

Second, initialize the chunks; later, if needed, they can be moved to other shards:
select create_chunks('example', 1, 4);
 create_chunks
---------------
 {1,2,3,4}
(1 row)
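
The chunk metadata is recorded in the chunks table (it also shows up in the \d listing at the end). As a quick sanity check you can query it directly; only the chunk_id and relation_id columns are used in this article, the rest of the chunks schema is not shown here, so this is just a sketch:

-- list the chunks registered for the parent table
select chunk_id
from chunks
where relation_id = 'example'::regclass;

It should return the same four ids that create_chunks reported above.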

Third, each chunk can be partitioned based on its timestamp column:
select init_partition(
    'example_' || trim(to_char(chunk_id, '000')), 'created_at', 'hourly')
from chunks where relation_id = 'example'::regclass;
 init_partition
----------------
 example_001
 example_002
 example_003
 example_004
(4 rows)
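
The chunk table names come from the 'example_' || trim(to_char(chunk_id, '000')) expression above: to_char pads the id to three digits but prefixes a space reserved for the sign, which trim removes. For example:

-- to_char(2, '000') yields ' 002' (note the sign space); trim strips it
select 'example_' || trim(to_char(2, '000')) as chunk_name;
 chunk_name
-------------
 example_002
(1 row)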

Finally, insert some data and look at the result:
insert into example_001 (data, created_at) values('test data', timestamp '2016-01-12 17:00:00');
insert into example_002 (data, created_at) values('test data', timestamp '2016-02-12 18:00:00');
insert into example_001 (data, created_at) values('test data', timestamp '2016-01-12 18:00:00');
insert into example_002 (data, created_at) values('test data', timestamp '2016-02-12 17:00:00');

As a result, we get the following tables in the database:
test-server=> \d
 Schema |            Name             |   Type   |    Owner
--------+-----------------------------+----------+-------------
 public | chunks                      | table    | test-server
 public | chunks_id_seq               | sequence | test-server
 public | example                     | table    | test-server
 public | example_001                 | table    | test-server
 public | example_001_y16_m01_d12_h17 | table    | test-server
 public | example_001_y16_m01_d12_h18 | table    | test-server
 public | example_002                 | table    | test-server
 public | example_002_y16_m02_d12_h17 | table    | test-server
 public | example_002_y16_m02_d12_h18 | table    | test-server
 public | example_003                 | table    | test-server
 public | example_004                 | table    | test-server
 public | example_id_seq              | sequence | test-server
(12 rows)
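
Assuming the hourly partitions are plain child tables of each chunk (inheritance or declarative partitioning; the extension's internals are not shown here), you can verify where each row landed by selecting tableoid alongside the data:

-- show which partition each row of example_001 was routed to
select tableoid::regclass as partition, data, created_at
from example_001
order by created_at;

For the two January rows this should report example_001_y16_m01_d12_h17 and example_001_y16_m01_d12_h18, matching the \d listing above.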