CREATE TABLE


The CREATE TABLE statement creates a new table in a database.

Note:

The CREATE TABLE statement performs a schema change. For more information about how online schema changes work in CockroachDB, see Online Schema Changes.

Required privileges

To create a table, the user must have one of the following:

  • Membership to the admin role for the cluster.
  • Membership to the owner role for the database.
  • The CREATE privilege on the database.

Synopsis

opt_persistence_temp_table ::=
    LOCAL GLOBAL TEMPORARY TEMP UNLOGGED

column_def ::=

col_qualification ::=
    CONSTRAINT constraint_name NOT NULL VISIBLE NULL UNIQUE PRIMARY KEY USING HASH opt_with_storage_parameter_list CHECK ( a_expr ) DEFAULT ON UPDATE b_expr REFERENCES table_name opt_name_parens key_match reference_actions generated_as ( a_expr ) STORED VIRTUAL GENERATED_ALWAYS ALWAYS GENERATED_BY_DEFAULT BY DEFAULT AS IDENTITY ( opt_sequence_option_list ) COLLATE collation_name FAMILY family_name CREATE FAMILY family_name IF NOT EXISTS FAMILY family_name

index_def ::=

family_def ::=
    FAMILY opt_family_name ( name , )

table_constraint ::=

like_table_option_list ::=
    INCLUDING EXCLUDING CONSTRAINTS DEFAULTS GENERATED INDEXES ALL

opt_with_storage_parameter_list ::=
    WITH ( storage_parameter , )

opt_locality ::=
    LOCALITY GLOBAL REGIONAL BY TABLE IN region_name PRIMARY REGION ROW AS column_name IN region_name PRIMARY REGION

Tip:
To create a table from the results of a SELECT statement, use CREATE TABLE AS.

Parameters

opt_persistence_temp_table
Defines the table as a session-scoped temporary table. For more information, see Temporary Tables. For a brief syntax sketch, see the example after this parameter list.

Note that the LOCAL, GLOBAL, and UNLOGGED options are no-ops, allowed by the parser for PostgreSQL compatibility.

Support for temporary tables is in preview.

IF NOT EXISTS
Create a new table only if a table of the same name does not already exist in the database; if one does exist, do not return an error.

Note that IF NOT EXISTS checks the table name only; it does not check whether an existing table has the same columns, indexes, constraints, etc., as the new table.

table_name
The name of the table to create, which must be unique within its database and follow these identifier rules. When the parent database is not set as the default, the name must be formatted as database.name.

The UPSERT and INSERT ON CONFLICT statements use a temporary table called excluded to handle uniqueness conflicts during execution. It's therefore not recommended to use the name excluded for any of your tables.

column_def
A comma-separated list of column definitions. Each column requires a name/identifier and data type. Column names must be unique within the table but can have the same name as indexes or constraints.

You can optionally specify a column qualification (e.g., a column-level constraint). Any PRIMARY KEY, UNIQUE, and CHECK constraints defined at the column level are moved to the table level as part of the table's creation. Use the SHOW CREATE statement to view them at the table level.

index_def
An optional, comma-separated list of index definitions. For each index, the column(s) to index must be specified; optionally, a name can be specified. Index names must be unique within the table and follow these identifier rules. See the Create a table with secondary and GIN indexes example below.

For examples, see Create a table with a hash-sharded primary index and Create a table with a hash-sharded secondary index below.

The CREATE INDEX statement can be used to create an index separate from table creation.

family_def
An optional, comma-separated list of column family definitions. Column family names must be unique within the table but can have the same name as columns, constraints, or indexes.

A column family is a group of columns that are stored as a single key-value pair in the underlying key-value store. CockroachDB automatically groups columns into families to ensure efficient storage and performance. However, there are cases when you may want to manually assign columns to families. For more details, see Column Families.

table_constraint
An optional, comma-separated list of table-level constraints. Constraint names must be unique within the table but can have the same name as columns, column families, or indexes.

LIKE table_name like_table_option_list
Create a new table based on the schema of an existing table, using supported specifiers. For details, see Create a table like an existing table. For examples, see Create a new table from an existing one.

opt_partition_by
An Enterprise-only option that lets you define table partitions at the row level. You can define table partitions by list or by range. See Define Table Partitions for more information.

opt_locality
Specify a locality for the table. In order to set a locality, the table must belong to a multi-region database.

Note that multi-region features require an Enterprise license.

opt_where_clause
An optional WHERE clause that defines the predicate boolean expression of a partial index.

opt_with_storage_parameter_list
A comma-separated list of spatial index tuning parameters. Supported parameters include fillfactor, s2_max_level, s2_level_mod, s2_max_cells, geometry_min_x, geometry_max_x, geometry_min_y, and geometry_max_y. The fillfactor parameter is a no-op, allowed for PostgreSQL compatibility.

For details, see Spatial index tuning parameters. For an example, see Create a spatial index that uses all of the tuning parameters.

ON COMMIT PRESERVE ROWS
This clause is a no-op, allowed by the parser for PostgreSQL compatibility. CockroachDB only supports session-scoped temporary tables, and does not support the clauses ON COMMIT DELETE ROWS and ON COMMIT DROP, which are used to define transaction-scoped temporary tables in PostgreSQL.
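The following is a minimal sketch of the temporary-table syntax using an illustrative table name. It assumes temporary tables are enabled for the session; while the feature is in preview, this may require setting the experimental_enable_temp_tables session variable first.

> SET experimental_enable_temp_tables = 'on';

> CREATE TEMP TABLE scratch (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        payload STRING
) ON COMMIT PRESERVE ROWS;  -- no-op clause, accepted for PostgreSQL compatibility

The table is visible only to the current session and is dropped automatically when the session ends.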

Column qualifications

CockroachDB supports the following column qualifications:

ON UPDATE expressions

ON UPDATE expressions update column values in the following cases:

  • An UPDATE or UPSERT statement modifies a different column value in the same row.
  • An ON UPDATE CASCADE foreign key action modifies a different column value in the same row.

ON UPDATE expressions do not update column values in the following cases:

  • An UPDATE or UPSERT statement directly modifies the value of a column with an ON UPDATE expression.
  • An UPSERT statement creates a new row.
  • A new column is backfilled with values (e.g., by a DEFAULT expression).

Note the following limitations of ON UPDATE expressions:

  • ON UPDATE expressions allow context-dependent expressions, but not expressions that reference other columns. For example, the current_timestamp() built-in function is allowed, but CONCAT(<column_one>, <column_two>) is not.
  • You cannot add a foreign key constraint and an ON UPDATE expression to the same column.

For an example of ON UPDATE, see Add a column with an ON UPDATE expression.
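As a minimal sketch (table and column names are illustrative), the following column definition attaches an ON UPDATE expression: last_touched is refreshed whenever some other column in the same row is modified, but not when last_touched itself is written directly.

> CREATE TABLE inventory (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        quantity INT,
        last_touched TIMESTAMPTZ DEFAULT current_timestamp() ON UPDATE current_timestamp()
);

> UPDATE inventory SET quantity = quantity - 1 WHERE quantity > 0;
-- The ON UPDATE expression also refreshes last_touched in every row modified by this statement.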

Identity columns

Identity columns are columns that are populated with values in a sequence. When you create an identity column, CockroachDB creates a sequence and sets the default value for the identity column to the result of the nextval() built-in function on the sequence.

To create an identity column, add a GENERATED BY DEFAULT AS IDENTITY/GENERATED ALWAYS AS IDENTITY clause to the column definition, followed by sequence options. If you do not specify any sequence options in the column definition, the column assumes the default options of CREATE SEQUENCE.

If you use GENERATED BY DEFAULT AS IDENTITY to define the identity column, any INSERT/UPSERT/UPDATE operations that specify a new value for the identity column will overwrite the default sequential values in the column. If you use GENERATED ALWAYS AS IDENTITY, the column's sequential values cannot be overwritten.
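As a minimal sketch (table name is illustrative), a GENERATED ALWAYS identity column draws its values from the implicit sequence, and an INSERT that supplies an explicit value for the column is rejected:

> CREATE TABLE tickets (
        id INT8 GENERATED ALWAYS AS IDENTITY,
        note STRING
);

> INSERT INTO tickets (note) VALUES ('first');
-- id is populated from the sequence.

> INSERT INTO tickets (id, note) VALUES (100, 'second');
-- Rejected: the column's sequential values cannot be overwritten.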

For an example of an identity column, see Create a table with an identity column.

Create a table like an existing table

CockroachDB supports the CREATE TABLE LIKE syntax for creating a new table based on the schema of an existing table.

The following options are supported:

  • INCLUDING CONSTRAINTS adds all CHECK constraints from the source table.
  • INCLUDING DEFAULTS adds all DEFAULT column expressions from the source table.
  • INCLUDING GENERATED adds all computed column expressions from the source table.
  • INCLUDING INDEXES adds all indexes from the source table.
  • INCLUDING ALL includes all of the specifiers above.

To exclude specifiers, use the EXCLUDING keyword. Excluding specifiers can be useful if you want to use INCLUDING ALL, and exclude just one or two specifiers. The last INCLUDING/EXCLUDING keyword for a given specifier takes priority.

Note:

CREATE TABLE LIKE statements cannot copy column families, partitions, and foreign key constraints from existing tables. If you want these column qualifications in a new table, you must recreate them manually.

CREATE TABLE LIKE copies all hidden columns (e.g., the hidden crdb_region column in multi-region tables) from the existing table to the new table.

Supported LIKE specifiers can also be mixed with ordinary CREATE TABLE specifiers. For example:

CREATE TABLE table1 (a INT PRIMARY KEY, b INT NOT NULL DEFAULT 3 CHECK (b > 0), INDEX(b));

CREATE TABLE table2 (LIKE table1 INCLUDING ALL EXCLUDING CONSTRAINTS, c INT, INDEX(b,c));

In this example, table2 is created with the indexes and default values of table1, but not the CHECK constraints, because EXCLUDING CONSTRAINTS was specified after INCLUDING ALL. table2 also includes an additional column and index.

For additional examples, see Create a new table from an existing one.

Examples

Create a table

In this example, we create the users table with a single primary key column defined. In CockroachDB, every table requires a primary key. If one is not explicitly defined, a column called rowid of the type INT is added automatically as the primary key, with the unique_rowid() function used to ensure that new rows always default to unique rowid values. The primary key is automatically indexed.

For performance recommendations on primary keys, see the Schema Design: Create a Table page and the SQL Performance Best Practices page.

Note:

If no primary key is explicitly defined in a CREATE TABLE statement, you can add a primary key to the table with ADD CONSTRAINT ... PRIMARY KEY or ALTER PRIMARY KEY. If the ADD or ALTER statement follows the CREATE TABLE statement and is part of the same transaction, no default primary key is created. If the table has already been created and the transaction committed, the ADD or ALTER statement replaces the default primary key.
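As a sketch of the first scenario in the preceding note (table and constraint names are illustrative), creating the table and adding the primary key in the same transaction avoids creating the default rowid primary key:

> BEGIN;

> CREATE TABLE users_pk_later (
        id UUID NOT NULL DEFAULT gen_random_uuid(),
        name STRING
);

> ALTER TABLE users_pk_later ADD CONSTRAINT users_pk_later_pkey PRIMARY KEY (id);

> COMMIT;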

Note:
Strictly speaking, a primary key's unique index is not created; it is derived from the key(s) under which the data is stored, so it takes no additional space. However, it appears as a normal unique index when using commands like SHOW INDEX.
> CREATE TABLE users (
        id UUID PRIMARY KEY,
        city STRING,
        name STRING,
        address STRING,
        credit_card STRING,
        dl STRING
);
> SHOW COLUMNS FROM users;
  column_name | data_type | is_nullable | column_default | generation_expression |  indices  | is_hidden
+-------------+-----------+-------------+----------------+-----------------------+-----------+-----------+
  id          | UUID      |    false    | NULL           |                       | {primary} |   false
  city        | VARCHAR   |    false    | NULL           |                       | {primary} |   false
  name        | VARCHAR   |    true     | NULL           |                       | {primary} |   false
  address     | VARCHAR   |    true     | NULL           |                       | {primary} |   false
  credit_card | VARCHAR   |    true     | NULL           |                       | {primary} |   false
  dl          | STRING    |    true     | NULL           |                       | {primary} |   false
(6 rows)
> SHOW INDEX FROM users;
  table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit
+------------+------------+------------+--------------+-------------+-----------+---------+----------+
  users      | users_pkey |   false    |            1 | city        | ASC       |  false  |  false
  users      | users_pkey |   false    |            2 | id          | ASC       |  false  |  false
  users      | users_pkey |   false    |            3 | name        | N/A       |  true   |  false
  users      | users_pkey |   false    |            4 | address     | N/A       |  true   |  false
  users      | users_pkey |   false    |            5 | credit_card | N/A       |  true   |  false
  users      | users_pkey |   false    |            6 | dl          | N/A       |  true   |  false
(6 rows)

Create a table with secondary and GIN indexes

In this example, we create secondary and GIN indexes during table creation. Secondary indexes allow efficient access to data with keys other than the primary key. GIN indexes allow efficient access to the schemaless data in a JSONB column.

> CREATE TABLE vehicles (
        id UUID NOT NULL,
        city STRING NOT NULL,
        type STRING,
        owner_id UUID,
        creation_time TIMESTAMP,
        status STRING,
        current_location STRING,
        ext JSONB,
        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
        INDEX index_status (status),
        INVERTED INDEX ix_vehicle_ext (ext),
        FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext)
);
> SHOW INDEX FROM vehicles;
  table_name |   index_name   | non_unique | seq_in_index | column_name | direction | storing | implicit
-------------+----------------+------------+--------------+-------------+-----------+---------+-----------
  vehicles   | index_status   |    true    |            1 | status           | ASC       |  false  |  false
  vehicles   | index_status   |    true    |            2 | city             | ASC       |  false  |   true
  vehicles   | index_status   |    true    |            3 | id               | ASC       |  false  |   true
  vehicles   | ix_vehicle_ext |    true    |            1 | ext              | ASC       |  false  |  false
  vehicles   | ix_vehicle_ext |    true    |            2 | city             | ASC       |  false  |   true
  vehicles   | ix_vehicle_ext |    true    |            3 | id               | ASC       |  false  |   true
  vehicles   | vehicles_pkey  |   false    |            1 | city             | ASC       |  false  |  false
  vehicles   | vehicles_pkey  |   false    |            2 | id               | ASC       |  false  |  false
  vehicles   | vehicles_pkey  |   false    |            3 | type             | N/A       |  true   |  false
  vehicles   | vehicles_pkey  |   false    |            4 | owner_id         | N/A       |  true   |  false
  vehicles   | vehicles_pkey  |   false    |            5 | creation_time    | N/A       |  true   |  false
  vehicles   | vehicles_pkey  |   false    |            6 | status           | N/A       |  true   |  false
  vehicles   | vehicles_pkey  |   false    |            7 | current_location | N/A       |  true   |  false
  vehicles   | vehicles_pkey  |   false    |            8 | ext              | N/A       |  true   |  false
(14 rows)


Create a table with auto-generated unique row IDs

To auto-generate unique row identifiers, use a UUID column with the gen_random_uuid() function as the default value:

> CREATE TABLE users (
        id UUID NOT NULL DEFAULT gen_random_uuid(),
        city STRING NOT NULL,
        name STRING NULL,
        address STRING NULL,
        credit_card STRING NULL,
        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
        FAMILY "primary" (id, city, name, address, credit_card)
);
> INSERT INTO users (name, city) VALUES ('Petee', 'new york'), ('Eric', 'seattle'), ('Dan', 'seattle');
> SELECT * FROM users;
                   id                  |   city   | name  | address | credit_card
+--------------------------------------+----------+-------+---------+-------------+
  cf8ee4e2-cd74-449a-b6e6-a0fb2017baa4 | new york | Petee | NULL    | NULL
  2382564e-702f-42d9-a139-b6df535ae00a | seattle  | Eric  | NULL    | NULL
  7d27e40b-263a-4891-b29b-d59135e55650 | seattle  | Dan   | NULL    | NULL
(3 rows)

Alternatively, you can use a BYTES column with the uuid_v4() function as the default value instead:

> CREATE TABLE users2 (
        id BYTES DEFAULT uuid_v4(),
        city STRING NOT NULL,
        name STRING NULL,
        address STRING NULL,
        credit_card STRING NULL,
        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
        FAMILY "primary" (id, city, name, address, credit_card)
);
> INSERT INTO users2 (name, city) VALUES ('Anna', 'new york'), ('Jonah', 'seattle'), ('Terry', 'chicago');
> SELECT * FROM users2;
                        id                       |   city   | name  | address | credit_card
+------------------------------------------------+----------+-------+---------+-------------+
  4\244\277\323/\261M\007\213\275*\0060\346\025z | chicago  | Terry | NULL    | NULL
  \273*t=u.F\010\274f/}\313\332\373a             | new york | Anna  | NULL    | NULL
  \004\\\364nP\024L)\252\364\222r$\274O0         | seattle  | Jonah | NULL    | NULL
(3 rows)

In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. Also, once the table grows beyond a single key-value range (more than 512 MiB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load.

This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index.

If it is important for generated IDs to be stored in the same key-value range, you can use an integer type with the unique_rowid() function as the default value, either explicitly or via the SERIAL pseudo-type:

> CREATE TABLE users3 (
        id INT DEFAULT unique_rowid(),
        city STRING NOT NULL,
        name STRING NULL,
        address STRING NULL,
        credit_card STRING NULL,
        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
        FAMILY "primary" (id, city, name, address, credit_card)
);
> INSERT INTO users3 (name, city) VALUES ('Blake', 'chicago'), ('Hannah', 'seattle'), ('Bobby', 'seattle');
> SELECT * FROM users3;
          id         |  city   |  name  | address | credit_card
+--------------------+---------+--------+---------+-------------+
  469048192112197633 | chicago | Blake  | NULL    | NULL
  469048192112263169 | seattle | Hannah | NULL    | NULL
  469048192112295937 | seattle | Bobby  | NULL    | NULL
(3 rows)

Upon insert or upsert, the unique_rowid() function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed.

For further background on UUIDs, see What is a UUID, and Why Should You Care?.

Create a table with a foreign key constraint

Foreign key constraints guarantee a column uses only values that already exist in the column it references, which must be from another table. This constraint enforces referential integrity between the two tables.

There are a number of rules that govern foreign keys, but the most important rule is that referenced columns must contain only unique values. This means the REFERENCES clause must use exactly the same columns as a primary key or unique constraint.

You can include a foreign key action to specify what happens when a column referenced by a foreign key constraint is updated or deleted. The default actions are ON UPDATE NO ACTION and ON DELETE NO ACTION.

In this example, we use ON DELETE CASCADE (i.e., when a row referenced by a foreign key constraint is deleted, all dependent rows are also deleted).

> CREATE TABLE users (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        city STRING,
        name STRING,
        address STRING,
        credit_card STRING,
        dl STRING UNIQUE CHECK (LENGTH(dl) < 8)
);
> CREATE TABLE vehicles (
        id UUID NOT NULL DEFAULT gen_random_uuid(),
        city STRING NOT NULL,
        type STRING,
        owner_id UUID REFERENCES users(id) ON DELETE CASCADE,
        creation_time TIMESTAMP,
        status STRING,
        current_location STRING,
        ext JSONB,
        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
        INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC),
        INVERTED INDEX ix_vehicle_ext (ext),
        FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext)
);
> SHOW CREATE TABLE vehicles;
  table_name |                                          create_statement
+------------+-----------------------------------------------------------------------------------------------------+
  vehicles   | CREATE TABLE vehicles (
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     city STRING NOT NULL,
             |     type STRING NULL,
             |     owner_id UUID NULL,
             |     creation_time TIMESTAMP NULL,
             |     status STRING NULL,
             |     current_location STRING NULL,
             |     ext JSONB NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
             |     INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC),
             |     INVERTED INDEX ix_vehicle_ext (ext),
             |     CONSTRAINT fk_owner_id_ref_users FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE,
             |     INDEX vehicles_auto_index_fk_owner_id_ref_users (owner_id ASC),
             |     FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext)
             | )
(1 row)
> INSERT INTO users (name, dl) VALUES ('Annika', 'ABC-123');
> SELECT * FROM users;
                   id                  | city |  name  | address | credit_card |   dl
+--------------------------------------+------+--------+---------+-------------+---------+
  26da1fce-59e1-4290-a786-9068242dd195 | NULL | Annika | NULL    | NULL        | ABC-123
(1 row)
> INSERT INTO vehicles (city, owner_id) VALUES ('seattle', '26da1fce-59e1-4290-a786-9068242dd195');
> SELECT * FROM vehicles;
                   id                  |  city   | type |               owner_id               | creation_time | status | current_location | ext
+--------------------------------------+---------+------+--------------------------------------+---------------+--------+------------------+------+
  fc6f7a8c-4ba9-42e1-9c37-7be3c906050c | seattle | NULL | 26da1fce-59e1-4290-a786-9068242dd195 | NULL          | NULL   | NULL             | NULL
(1 row)
> DELETE FROM users WHERE id = '26da1fce-59e1-4290-a786-9068242dd195';
> SELECT * FROM vehicles;
  id | city | type | owner_id | creation_time | status | current_location | ext
+----+------+------+----------+---------------+--------+------------------+-----+
(0 rows)

Create a table with a check constraint

In this example, we create the users table, but with some column constraints. One column is the primary key, and another column is given a unique constraint and a check constraint that limits the length of the string. Primary key columns and columns with unique constraints are automatically indexed.

> CREATE TABLE users (
        id UUID PRIMARY KEY,
        city STRING,
        name STRING,
        address STRING,
        credit_card STRING,
        dl STRING UNIQUE CHECK (LENGTH(dl) < 8)
);
> SHOW COLUMNS FROM users;
  column_name | data_type | is_nullable | column_default | generation_expression |           indices           | is_hidden
--------------+-----------+-------------+----------------+-----------------------+-----------------------------+------------
  id          | UUID      |    false    | NULL           |                       | {users_name_idx,users_pkey} |   false
  city        | VARCHAR   |    false    | NULL           |                       | {users_name_idx,users_pkey} |   false
  name        | VARCHAR   |    true     | NULL           |                       | {users_name_idx,users_pkey} |   false
  address     | VARCHAR   |    true     | NULL           |                       | {users_pkey}                |   false
  credit_card | VARCHAR   |    true     | NULL           |                       | {users_pkey}                |   false
  dl          | STRING    |    true     | NULL           |                       | {users_dl_key}              |   false
(6 rows)
> SHOW INDEX FROM users;
  table_name |  index_name  | non_unique | seq_in_index | column_name | direction | storing | implicit
+------------+--------------+------------+--------------+-------------+-----------+---------+----------+
  users      | users_pkey   |   false    |            1 | id          | ASC       |  false  |  false
  users      | users_dl_key |   false    |            1 | dl          | ASC       |  false  |  false
  users      | users_dl_key |   false    |            2 | id          | ASC       |  false  |   true
(3 rows)

Create a table that mirrors key-value storage

CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key:

> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES);

When such a "simple" table has no indexes or foreign keys, INSERT/UPSERT/UPDATE/DELETE statements translate to key-value operations with minimal overhead (single-digit percent slowdowns). For example, the following UPSERT to add or replace a row in the table would translate into a single key-value Put operation:

> UPSERT INTO kv VALUES (1, b'hello');

This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises.
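For instance, a sketch of adding a column later, should this key-value-style schema need to grow (the column name is illustrative):

> ALTER TABLE kv ADD COLUMN created_at TIMESTAMPTZ DEFAULT now();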

Create a table from a SELECT statement

You can use the CREATE TABLE AS statement to create a new table from the results of a SELECT statement. For example, suppose you have a number of rows of user data in the users table, and you want to create a new table from the subset of users that are located in New York.

> SELECT * FROM users WHERE city = 'new york';
                   id                  |   city   |       name       |           address           | credit_card
+--------------------------------------+----------+------------------+-----------------------------+-------------+
  00000000-0000-4000-8000-000000000000 | new york | Robert Murphy    | 99176 Anderson Mills        | 8885705228
  051eb851-eb85-4ec0-8000-000000000001 | new york | James Hamilton   | 73488 Sydney Ports Suite 57 | 8340905892
  0a3d70a3-d70a-4d80-8000-000000000002 | new york | Judy White       | 18580 Rosario Ville Apt. 61 | 2597958636
  0f5c28f5-c28f-4c00-8000-000000000003 | new york | Devin Jordan     | 81127 Angela Ferry Apt. 8   | 5614075234
  147ae147-ae14-4b00-8000-000000000004 | new york | Catherine Nelson | 1149 Lee Alley              | 0792553487
(5 rows)
> CREATE TABLE users_ny AS SELECT * FROM users WHERE city = 'new york';
> SELECT * FROM users_ny;
                   id                  |   city   |       name       |           address           | credit_card
+--------------------------------------+----------+------------------+-----------------------------+-------------+
  00000000-0000-4000-8000-000000000000 | new york | Robert Murphy    | 99176 Anderson Mills        | 8885705228
  051eb851-eb85-4ec0-8000-000000000001 | new york | James Hamilton   | 73488 Sydney Ports Suite 57 | 8340905892
  0a3d70a3-d70a-4d80-8000-000000000002 | new york | Judy White       | 18580 Rosario Ville Apt. 61 | 2597958636
  0f5c28f5-c28f-4c00-8000-000000000003 | new york | Devin Jordan     | 81127 Angela Ferry Apt. 8   | 5614075234
  147ae147-ae14-4b00-8000-000000000004 | new york | Catherine Nelson | 1149 Lee Alley              | 0792553487
(5 rows)

Create a table with a computed column

In this example, let's create a simple table with a computed column:

> CREATE TABLE users (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        city STRING,
        first_name STRING,
        last_name STRING,
        full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED,
        address STRING,
        credit_card STRING,
        dl STRING UNIQUE CHECK (LENGTH(dl) < 8)
);

Then, insert a few rows of data:

> INSERT INTO users (first_name, last_name) VALUES
    ('Lola', 'McDog'),
    ('Carl', 'Kimball'),
    ('Ernie', 'Narayan');
> SELECT * FROM users;
                   id                  | city | first_name | last_name |   full_name   | address | credit_card |  dl
+--------------------------------------+------+------------+-----------+---------------+---------+-------------+------+
  5740da29-cc0c-47af-921c-b275d21d4c76 | NULL | Ernie      | Narayan   | Ernie Narayan | NULL    | NULL        | NULL
  e7e0b748-9194-4d71-9343-cd65218848f0 | NULL | Lola       | McDog     | Lola McDog    | NULL    | NULL        | NULL
  f00e4715-8ca7-4d5a-8de5-ef1d5d8092f3 | NULL | Carl       | Kimball   | Carl Kimball  | NULL    | NULL        | NULL
(3 rows)

The full_name column is computed from the first_name and last_name columns without the need to define a view.

Create a table with a hash-sharded primary index

For performance reasons, we discourage indexing on sequential keys. If, however, you are working with a table that must be indexed on sequential keys, you should use hash-sharded indexes. Hash-sharded indexes distribute sequential traffic uniformly across ranges, eliminating single-range hot spots and improving write performance on sequentially-keyed indexes at a small cost to read performance.

Let's create the products table and add a hash-sharded primary key on the ts column:

> CREATE TABLE products (
    ts DECIMAL PRIMARY KEY USING HASH,
    product_id INT8
    );
> SHOW INDEX FROM products;
  table_name |  index_name   | non_unique | seq_in_index |        column_name        | direction | storing | implicit
-------------+---------------+------------+--------------+---------------------------+-----------+---------+-----------
  products   | products_pkey |   false    |            1 | crdb_internal_ts_shard_16 | ASC       |  false  |   true
  products   | products_pkey |   false    |            2 | ts                        | ASC       |  false  |  false
  products   | products_pkey |   false    |            3 | product_id                | N/A       |  true   |  false
(3 rows)
> SHOW COLUMNS FROM products;
         column_name        | data_type | is_nullable | column_default |               generation_expression               |     indices     | is_hidden
----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------+------------
  crdb_internal_ts_shard_16 | INT8      |    false    | NULL           | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {products_pkey} |   true
  ts                        | DECIMAL   |    false    | NULL           |                                                   | {products_pkey} |   false
  product_id                | INT8      |    true     | NULL           |                                                   | {products_pkey} |   false
(3 rows)

Create a table with a hash-sharded secondary index

Let's now create the events table and add a hash-sharded secondary index on the ts column in a single statement:

> CREATE TABLE events (
    product_id INT8,
    owner UUID,
    serial_number VARCHAR,
    event_id UUID,
    ts TIMESTAMP,
    data JSONB,
    PRIMARY KEY (product_id, owner, serial_number, ts, event_id),
    INDEX (ts) USING HASH
);
> SHOW INDEX FROM events;
  table_name |  index_name   | non_unique | seq_in_index |        column_name        | direction | storing | implicit
-------------+---------------+------------+--------------+---------------------------+-----------+---------+-----------
  events     | events_pkey   |   false    |            1 | product_id                | ASC       |  false  |  false
  events     | events_pkey   |   false    |            2 | owner                     | ASC       |  false  |  false
  events     | events_pkey   |   false    |            3 | serial_number             | ASC       |  false  |  false
  events     | events_pkey   |   false    |            4 | ts                        | ASC       |  false  |  false
  events     | events_pkey   |   false    |            5 | event_id                  | ASC       |  false  |  false
  events     | events_pkey   |   false    |            6 | data                      | N/A       |  true   |  false
  events     | events_ts_idx |    true    |            1 | crdb_internal_ts_shard_16 | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            2 | ts                        | ASC       |  false  |  false
  events     | events_ts_idx |    true    |            3 | product_id                | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            4 | owner                     | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            5 | serial_number             | ASC       |  false  |   true
  events     | events_ts_idx |    true    |            6 | event_id                  | ASC       |  false  |   true
(12 rows)
> SHOW COLUMNS FROM events;
         column_name        | data_type | is_nullable | column_default |               generation_expression               |           indices           | is_hidden
----------------------------+-----------+-------------+----------------+---------------------------------------------------+-----------------------------+------------
  product_id                | INT8      |    false    | NULL           |                                                   | {events_pkey,events_ts_idx} |   false
  owner                     | UUID      |    false    | NULL           |                                                   | {events_pkey,events_ts_idx} |   false
  serial_number             | VARCHAR   |    false    | NULL           |                                                   | {events_pkey,events_ts_idx} |   false
  event_id                  | UUID      |    false    | NULL           |                                                   | {events_pkey,events_ts_idx} |   false
  ts                        | TIMESTAMP |    false    | NULL           |                                                   | {events_pkey,events_ts_idx} |   false
  data                      | JSONB     |    true     | NULL           |                                                   | {events_pkey}               |   false
  crdb_internal_ts_shard_16 | INT8      |    false    | NULL           | mod(fnv32(crdb_internal.datums_to_bytes(ts)), 16) | {events_ts_idx}             |   true
(7 rows)

Create a new table from an existing one

Create a table including all supported source specifiers

> SHOW CREATE TABLE vehicles;
  table_name |                                              create_statement
-------------+-------------------------------------------------------------------------------------------------------------
  vehicles   | CREATE TABLE public.vehicles (
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     city STRING NOT NULL,
             |     type STRING NULL,
             |     owner_id UUID NULL,
             |     creation_time TIMESTAMP NULL,
             |     status STRING NULL,
             |     current_location STRING NULL,
             |     ext JSONB NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT fk_owner_id_ref_users FOREIGN KEY (owner_id) REFERENCES public.users(id) ON DELETE CASCADE,
             |     INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC),
             |     INVERTED INDEX ix_vehicle_ext (ext),
             |     FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext)
             | )
(1 row)
> CREATE TABLE vehicles2 (
        LIKE vehicles INCLUDING ALL
);
> SHOW CREATE TABLE vehicles2;
  table_name |                                       create_statement
-------------+------------------------------------------------------------------------------------------------
  vehicles2  | CREATE TABLE public.vehicles2 (
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     city STRING NOT NULL,
             |     type STRING NULL,
             |     owner_id UUID NULL,
             |     creation_time TIMESTAMP NULL,
             |     status STRING NULL,
             |     current_location STRING NULL,
             |     ext JSONB NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
             |     INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC),
             |     INVERTED INDEX ix_vehicle_ext (ext),
             |     FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext)
             | )
(1 row)

Note that the foreign key constraint fk_owner_id_ref_users in the source table is not included in the new table.

Create a table with some source specifiers and a foreign key constraint

> CREATE TABLE vehicles3 (
        LIKE vehicles INCLUDING DEFAULTS INCLUDING INDEXES,
        CONSTRAINT fk_owner_id_ref_users FOREIGN KEY (owner_id) REFERENCES public.users(id) ON DELETE CASCADE
);
> SHOW CREATE TABLE vehicles3;
  table_name |                                              create_statement
-------------+-------------------------------------------------------------------------------------------------------------
  vehicles3  | CREATE TABLE public.vehicles3 (
             |     id UUID NOT NULL DEFAULT gen_random_uuid(),
             |     city STRING NOT NULL,
             |     type STRING NULL,
             |     owner_id UUID NULL,
             |     creation_time TIMESTAMP NULL,
             |     status STRING NULL,
             |     current_location STRING NULL,
             |     ext JSONB NULL,
             |     CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
             |     CONSTRAINT fk_owner_id_ref_users FOREIGN KEY (owner_id) REFERENCES public.users(id) ON DELETE CASCADE,
             |     INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC),
             |     INVERTED INDEX ix_vehicle_ext (ext),
             |     FAMILY "primary" (id, city, type, owner_id, creation_time, status, current_location, ext)
             | )
(1 row)

Create a table in a multi-region database

To create a table with a specific table locality in a multi-region database, add a LOCALITY clause to the end of the table's CREATE TABLE statement.

Note:

In order to set table localities, the database that contains the table must have database regions.

By default, all tables in a multi-region database have a REGIONAL BY TABLE IN PRIMARY REGION locality.

Create a table with a global locality

To create a table with a GLOBAL locality, add a LOCALITY GLOBAL clause to the end of the CREATE TABLE statement.

The GLOBAL locality is useful for "read-mostly" tables of reference data that are rarely updated, but need to be read with low latency from all regions.

For example, the promo_codes table of the movr database is rarely updated after being initialized, but it needs to be read by nodes in all regions.

> CREATE TABLE promo_codes (
    code STRING PRIMARY KEY,
    description STRING,
    creation_time TIMESTAMP,
    expiration_time TIMESTAMP,
    rules JSONB)
    LOCALITY GLOBAL;
> WITH x AS (SHOW TABLES)
SELECT * FROM x WHERE table_name='promo_codes';
  schema_name | table_name  | type  | owner | estimated_row_count | locality
--------------+-------------+-------+-------+---------------------+-----------
  public      | promo_codes | table | demo  |                   0 | GLOBAL
(1 row)

Create a table with a regional-by-table locality

To create a table with a REGIONAL BY TABLE locality, add a LOCALITY REGIONAL BY TABLE clause to the end of the CREATE TABLE statement.

Note:

REGIONAL BY TABLE IN PRIMARY REGION is the default locality for all tables created in a multi-region database.

The REGIONAL BY TABLE locality is useful for tables that require low-latency reads and writes from a specific region.

For example, suppose you want to create a table for your application's end users in a specific state:

> CREATE TABLE users_ny (
    id UUID PRIMARY KEY,
    name STRING,
    address STRING)
    LOCALITY REGIONAL BY TABLE IN "us-east1";
> WITH x AS (SHOW TABLES) SELECT * FROM x WHERE table_name='users_ny';
  schema_name | table_name | type  | owner | estimated_row_count |            locality
--------------+------------+-------+-------+---------------------+----------------------------------
  public      | users_ny   | table | demo  |                   0 | REGIONAL BY TABLE IN "us-east1"
(1 row)
Tip:

LOCALITY REGIONAL is an alias for LOCALITY REGIONAL BY TABLE.

Create a table with a regional-by-row locality

To create a table with a REGIONAL-BY-ROW locality, add a LOCALITY REGIONAL BY ROW clause to the end of the CREATE TABLE statement.

The REGIONAL BY ROW locality is useful for tables that require low-latency reads and writes from different regions, where the low-latency region is specified at the row level.

For example, the vehicles table of the movr database is read and written by nodes in different regions.

> CREATE TABLE vehicles (
    id UUID PRIMARY KEY,
    type STRING,
    city STRING,
    owner_id UUID,
    creation_time TIMESTAMP,
    status STRING,
    current_location STRING,
    ext JSONB)
    LOCALITY REGIONAL BY ROW;

CockroachDB will automatically assign each row to a region based on the locality of the node from which the row is inserted. It will then optimize subsequent read and write queries executed from nodes located in the region assigned to the rows being queried.

Note:

If the node from which a row is inserted has a locality that does not correspond to a region in the database, then the row will be assigned to the database's primary region.

To assign rows to regions, CockroachDB creates and manages a hidden crdb_region column, of ENUM type crdb_internal_region. To override the automatic region assignment and choose the region in which rows will be placed, you can provide a value for the crdb_region column in INSERT and UPDATE queries on the table.

Note:

The region value for crdb_region must be one of the regions added to the database, and present in the crdb_internal_region ENUM. To return the available regions, use a SHOW REGIONS FROM DATABASE <database name> statement, or a SHOW ENUMS statement.

For example:

> SHOW REGIONS FROM DATABASE movr;
  database |    region    | primary |  zones
-----------+--------------+---------+----------
  movr     | us-east1     |  true   | {b,c,d}
  movr     | europe-west1 |  false  | {b,c,d}
  movr     | us-west1     |  false  | {a,b,c}
(3 rows)
> SHOW ENUMS;
  schema |         name         |              values              | owner
---------+----------------------+----------------------------------+--------
  public | crdb_internal_region | {europe-west1,us-east1,us-west1} | root
(1 row)

You can then manually set the values of the region with each INSERT statement:

> INSERT INTO vehicles (crdb_region, ...) VALUES ('us-east1', ...);

Alternatively, you can update the crdb_region value of existing rows based on the value of another column, such as the city column:

> UPDATE vehicles SET crdb_region = 'us-east1' WHERE city IN (...) ...

Create a table with a regional-by-row locality, using a custom region column

To create a table with a REGIONAL-BY-ROW locality, where the region of each row in a table is based on the value of a specific column that you create, you can add a LOCALITY REGIONAL BY ROW AS <region> clause to the end of the CREATE TABLE statement.

Using the LOCALITY REGIONAL BY ROW AS <region> clause, you can assign rows to regions based on the value of any custom column of type crdb_internal_region.

For example:

> CREATE TABLE vehicles (
    id UUID PRIMARY KEY,
    type STRING,
    city STRING,
    region crdb_internal_region AS (
      CASE
        WHEN city IN ('new york', 'boston', 'washington dc', 'chicago', 'detroit', 'minneapolis') THEN 'us-east1'
        WHEN city IN ('san francisco', 'seattle', 'los angeles') THEN 'us-west1'
        WHEN city IN ('amsterdam', 'paris', 'rome') THEN 'europe-west1'
      END) STORED,
    owner_id UUID,
    creation_time TIMESTAMP,
    status STRING,
    current_location STRING,
    ext JSONB)
    LOCALITY REGIONAL BY ROW AS region;

CockroachDB will then assign a region to each row, based on the value of the region column. In this example, the region column is computed from the value of the city column.

Create a table with an identity column

Identity columns define a sequence from which to populate a column when a new row is inserted.

For example:

> CREATE TABLE bank (
    id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
    order_index INT8 UNIQUE,
    balance INT8,
    payload STRING,
    numerical INT8 GENERATED BY DEFAULT AS IDENTITY (INCREMENT 1 MINVALUE 0 START 0)
);

CockroachDB creates a sequence to use as the numerical column's default value.

> SHOW SEQUENCES;
  sequence_schema |   sequence_name
------------------+---------------------
  public          | bank_numerical_seq
(1 row)
> SHOW COLUMNS FROM bank;
  column_name | data_type | is_nullable |                 column_default                 | generation_expression |            indices             | is_hidden
--------------+-----------+-------------+------------------------------------------------+-----------------------+--------------------------------+------------
  id          | UUID      |    false    | gen_random_uuid()                              |                       | {bank_order_index_key,primary} |   false
  order_index | INT8      |    true     | NULL                                           |                       | {bank_order_index_key,primary} |   false
  balance     | INT8      |    true     | NULL                                           |                       | {primary}                      |   false
  payload     | STRING    |    true     | NULL                                           |                       | {primary}                      |   false
  numerical   | INT8      |    false    | nextval('public.bank_numerical_seq'::REGCLASS) |                       | {primary}                      |   false
(5 rows)

When a new row is added to the table, CockroachDB populates the numerical column with the result of the nextval('bank_numerical_seq') built-in function.

> INSERT INTO bank (order_index, balance) VALUES (1, 0), (2, 0), (3, 0);
> SELECT id, order_index, balance, numerical FROM bank ORDER BY order_index;
                   id                  | order_index | balance | numerical
---------------------------------------+-------------+---------+------------
  0b533801-052e-4837-8e13-0ef2fa6f8883 |           1 |       0 |         0
  9acc87ad-ced6-4744-9397-6a081a7a9c79 |           2 |       0 |         1
  4f929768-e3da-49cf-b8a6-5381e47953ca |           3 |       0 |         2
(3 rows)

The numerical column in this example follows the BY DEFAULT rule. According to this rule, if the value of an identity is explicitly updated, the sequence value is overwritten:

> UPDATE bank SET numerical = 500 WHERE id = '0b533801-052e-4837-8e13-0ef2fa6f8883';
> SELECT id, order_index, balance, numerical FROM bank ORDER BY order_index;
                   id                  | order_index | balance | numerical
---------------------------------------+-------------+---------+------------
  0b533801-052e-4837-8e13-0ef2fa6f8883 |           1 |       0 |       500
  9acc87ad-ced6-4744-9397-6a081a7a9c79 |           2 |       0 |         1
  4f929768-e3da-49cf-b8a6-5381e47953ca |           3 |       0 |         2
(3 rows)

Inserting explicit values does not affect the next value of the sequence:

> INSERT INTO bank (order_index, balance, numerical) VALUES (4, 0, 3);
> INSERT INTO bank (order_index, balance) VALUES (5, 0);
> SELECT id, order_index, balance, numerical FROM bank ORDER BY order_index;
                   id                  | order_index | balance | numerical
---------------------------------------+-------------+---------+------------
  0b533801-052e-4837-8e13-0ef2fa6f8883 |           1 |       0 |       500
  9acc87ad-ced6-4744-9397-6a081a7a9c79 |           2 |       0 |         1
  4f929768-e3da-49cf-b8a6-5381e47953ca |           3 |       0 |         2
  9165ab56-c41c-4a8a-ac0c-15e82243dc4d |           4 |       0 |         3
  40b5620f-cd56-4c03-b0ab-b4a63956dfe6 |           5 |       0 |         3
(5 rows)
Note:

If the numerical column were to follow the ALWAYS rule instead, then the sequence values in the column could not be overwritten.

Create a table with data excluded from backup

New in v22.1: In some situations, you may want to exclude a table's row data from a backup. For example, a table could contain high-churn data that you would like to garbage collect more quickly than the incremental backup schedule for the database or cluster that will hold the table. You can use the exclude_data_from_backup = true parameter with CREATE TABLE to mark a table's row data for exclusion from a backup:

CREATE TABLE promo_codes (
    code VARCHAR NOT NULL,
    description VARCHAR NULL,
    creation_time TIMESTAMP NULL,
    expiration_time TIMESTAMP NULL,
    rules JSONB NULL,
    CONSTRAINT promo_codes_pkey PRIMARY KEY (code ASC)
  )
WITH (exclude_data_from_backup = true);

To set exclude_data_from_backup on an existing table, see the Exclude a table's data from backups example.
