Author: Alexey Milovidov, 2026-02-26.
1. (50 min) What's new in ClickHouse 26.2.
2. (10 min) Q&A.
ClickHouse Winter Release.
— 25 new features
— 43 performance optimizations
— 183 bug fixes
And a new system.primes table.
:) SELECT * FROM primes(10)
    ┌─prime─┐
 1. │     2 │
 2. │     3 │
 3. │     5 │
 4. │     7 │
 5. │    11 │
 6. │    13 │
 7. │    17 │
 8. │    19 │
 9. │    23 │
10. │    29 │
    └───────┘
Demo
Developer: Nihal Miaji.
For introspection:
— system.tokenizers
— system.user_defined_functions
— system.jemalloc_stats
— system.jemalloc_profile_text
— system.fail_points
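The new tables can be queried like any other system table. A sketch (the exact column sets may differ between versions):

```sql
-- Inspect the currently registered fail points:
SELECT * FROM system.fail_points LIMIT 10;

-- Check allocator health at a glance:
SELECT * FROM system.jemalloc_stats;
```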
Demo
Developers: Robert Schulze, Xu Jia, Antonio Andelic, Pedro Ferreira.
Extended table aliases in JOINs:
SELECT *
FROM (SELECT 1) AS t(a)
JOIN (SELECT 1) AS u(b)
ON a = b
— that's a weird feature. Now we have it.
Developer: Yarik Briukhovetskyi.
— colorSRGBToOKLCH — 25.7
— colorOKLCHToSRGB — 25.7
— colorSRGBToOKLAB — new in 26.2
— colorOKLABToSRGB — new in 26.2
OKLAB — perceptually uniform color space
Useful for: doing arithmetic on colors, blending colors, generating gradients, programmatically choosing colors.
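For instance, blending two colors halfway in OKLAB avoids the muddy midpoints of naive sRGB averaging. A sketch, assuming the functions take and return a 3-tuple of channel values (check the documentation for the exact signatures and value ranges):

```sql
-- Blend red and blue in the perceptually uniform OKLAB space
-- (the tuple layout is an assumption, not a confirmed signature):
WITH
    colorSRGBToOKLAB((255., 0., 0.)) AS red,
    colorSRGBToOKLAB((0., 0., 255.)) AS blue
SELECT colorOKLABToSRGB((
    (red.1 + blue.1) / 2,
    (red.2 + blue.2) / 2,
    (red.3 + blue.3) / 2)) AS midpoint;
```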
Developer: Pranav Tiwari.
CREATE TABLE pageviews (event_time DateTime, ...)
SETTINGS add_minmax_index_for_temporal_columns = 1;
25.1:
— add_minmax_index_for_numeric_columns
— add_minmax_index_for_string_columns
26.2:
— add_minmax_index_for_temporal_columns
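With the setting enabled, time-range filters can prune whole granules through the implicit minmax index, without declaring any index by hand. A sketch:

```sql
-- The implicit minmax index on event_time lets this query skip
-- every granule that lies entirely outside the last 24 hours:
SELECT count()
FROM pageviews
WHERE event_time >= now() - INTERVAL 1 DAY;
```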
Developer: Michael Jarrett.
Enable it in clickhouse-server configuration:
$ cat config.d/keeper.yaml
keeper:
pass_opentelemetry_tracing_context: true
The data is saved to the opentelemetry_span_log table with a dynamic sample rate.
It tracks both client-side (clickhouse-server)
and server-side (clickhouse-keeper) spans.
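Once enabled, the collected spans can be inspected directly. A sketch, assuming the standard columns of the span log:

```sql
-- Which Keeper operations dominate, and how long do they take on average?
SELECT
    operation_name,
    count() AS spans,
    avg(finish_time_us - start_time_us) AS avg_duration_us
FROM system.opentelemetry_span_log
GROUP BY operation_name
ORDER BY spans DESC
LIMIT 10;
```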
Developer: Michael Stetsyuk.
When the table is sorted by expressions on columns:
CREATE TABLE pageviews (user_id UInt64, ...)
ORDER BY cityHash64(user_id)
In 25.1, this query didn't use index:
SELECT * FROM pageviews WHERE user_id = 123456;
You could make it use the index as follows:
SELECT * FROM pageviews WHERE user_id = 123456
AND cityHash64(user_id) = cityHash64(123456);
In 26.2, both queries use the index!
Developer: Nihal Z. Miaji.
When queries apply functions to indexed columns:
CREATE TABLE pageviews
(
user_id UInt64,
region LowCardinality(Nullable(String)),
lat Nullable(Float64),
lon Nullable(Float64),
INDEX idx_region (region) TYPE set(100),
INDEX idx_lat (lat) TYPE minmax,
INDEX idx_lon (lon) TYPE minmax
);
SELECT user_id FROM pageviews
WHERE coalesce(region, 'n/a') = 'Amsterdam'
AND ifNull(lat, 0) BETWEEN -10 AND 10
Developer: Nihal Z. Miaji.
By parallelizing processing of non-joined rows:
Developer: Yarik Briukhovetskyi.
For the JSON data type:
Developer: Pavel Kruglov.
For the case without GROUP BY:
Developer: Raúl Marín.
Speed-up of the calculation of minmax indices:
Developer: Raúl Marín.
Developer: Aaron Knudtson.
Secure interactive authentication in clickhouse-client,
with Google Authenticator, 1Password, Okta, and similar.
$ cat users.d/totp_user.yaml
users:
totp_user:
password_sha256_hex: ecd71870d1963316a97e3ac3408c9835ad8cf0f3c1bc703527c30265534f75ae
time_based_one_time_password:
secret: 6QNS5E7R35MDN62X7FV4LCUTY3SXRP3V
period: 30
digits: 6
algorithm: SHA1
networks:
ip: '::/0'
profile: default
quota: default
Demo
Developers: Denis Kamenskii, Vladimir Cherkasov.
CREATE DATABASE db_name
ENGINE = Atomic
SETTINGS lazy_load_tables = 1;
With this setting, the server does not load tables at startup;
each table is loaded on its first use.
Developer: XiaoHuanLin.
Let's insert an infinite stream into ClickHouse:
curl -sS --globoff -H 'Accept: application/json' --no-buffer \
"https://stream.wikimedia.org/v2/stream/recentchange" |
clickhouse-client --query "
INSERT INTO wikipedia_edits FORMAT JSONAsObject" \
--min_insert_block_size_rows=0 --min_insert_block_size_bytes=0 \
--input_format_max_block_wait_ms 1000 --input_format_connection_handling 1
It works. But how often will the data be flushed into the table?
— depends on the data rate and min_insert_block_size.
ClickHouse 26.2 introduces a new setting, input_format_max_block_wait_ms, which lets you define the flush interval by time instead of by block size.
Developer: Mostafa Mohamed Salah.
CREATE DATABASE biglake
ENGINE = DataLakeCatalog(
'https://biglake.googleapis.com/iceberg/v1/restcatalog')
SETTINGS catalog_type = 'biglake',
google_adc_credentials_file =
'/home/ubuntu/.config/gcloud/application_default_credentials.json',
warehouse = 'gs://biglake-public-nyc-taxi-iceberg'
Demo
Developer: Konstantin Vedernikov.
SELECTs from Iceberg now use the PREWHERE optimization.
Iceberg tables support ALTER TABLE RENAME COLUMN.
INSERTs into Iceberg are production-ready!
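Taken together, a round trip might look like this (a sketch with hypothetical database, table, and column names):

```sql
-- Assuming an Iceberg catalog database `ice` with a table `trips`:
ALTER TABLE ice.trips RENAME COLUMN fare TO fare_amount;

INSERT INTO ice.trips
SELECT * FROM trips_staging;

-- PREWHERE is now applied automatically to selective filters:
SELECT count() FROM ice.trips WHERE fare_amount > 100;
```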
Developers: Konstantin Vedernikov, murphy-4o.
A data type for vector embeddings
that allows tuning the search precision at runtime.
CREATE TABLE vectors (
id UInt64, name String, ...
vec QBit(BFloat16, 1536)
) ORDER BY ();
SELECT id, name FROM vectors
ORDER BY L2DistanceTransposed(vec, target, 10)
LIMIT 10;
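The last argument of L2DistanceTransposed controls how many of the most significant bit planes are read, trading accuracy for speed at query time; the stored data does not change. A sketch, assuming `target` is a query-side vector of the same dimension:

```sql
-- Fast, coarse pass: use only the top 6 bit planes of each BFloat16.
SELECT id, name FROM vectors
ORDER BY L2DistanceTransposed(vec, target, 6)
LIMIT 100;

-- Precise pass: use all 16 bits.
SELECT id, name FROM vectors
ORDER BY L2DistanceTransposed(vec, target, 16)
LIMIT 10;
```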
Developer: Raufs Dunamalijevs.
A full-text search index in ClickHouse.
— in development since 2022
— first prototype in 2023 (by Harry Lee and Larry Luo)
— experimental in 25.9
— beta in 25.12
— production in 26.2
Developers: Anton Popov, Elmi Ahmadov, Jimmy Aguilar Mena.
CREATE TABLE text_log
(
message String,
...
INDEX inv_idx(message)
TYPE text(tokenizer = 'splitByNonAlpha')
GRANULARITY 128
)
ENGINE = SharedMergeTree ORDER BY id;
SELECT ... WHERE hasToken(message, 'DDLWorker');
SELECT ... WHERE hasAllTokens(message, ['peak', 'memory']);
SELECT ... WHERE hasAnyTokens(message, tokens('01442_merge_detach_attach'));
Developers: Anton Popov, Elmi Ahmadov, Jimmy Aguilar Mena.
ADBC driver for ClickHouse (Arrow Database Connectivity).
MCP Server: added auth support for secure AI agent connections.
Language drivers:
— Python: native async client, Pandas 3 compatibility.
— Go: structured logging, BFloat16 and Time/Time64 types.
— C#: v1.0 with JSON column support, parameter handling, bulk copy.
Updates for Apache Spark, Apache Flink, Kafka Connect,
DBT Core, and Fivetran integrations.
MongoDB CDC in Public Beta in ClickPipes.
Monitor your Postgres with ClickHouse.
Open-source (Apache 2.0) Postgres extension.
Captures every event with minimal overhead.
npx skills add clickhouse/agent-skills
https://github.com/ClickHouse/ClickHouse/blob/master/AI_POLICY.md
You can use AI for ClickHouse development. We welcome and embrace AI usage, as well as research and experiments with the frontier AI models and novel methods of AI applications for software engineering.
You don't have to disclose your usage of AI. You can talk about it, share your experience, and show your methods, but it is not required. AI is a normal developer's tool, similar to an IDE, an OS, or a keyboard. We don't judge your work based on the usage of AI, but we recommend making an effort to filter out slop before sending a pull request; otherwise, it may negatively affect your reputation as an engineer.
— 🇺🇸 Seattle, Feb 26
— 🇮🇳 Bengaluru, Feb 28
— 🇦🇺 Melbourne: Data Streaming World, Mar 5
— 🇺🇸 Los Angeles, Mar 6
— 🇸🇬 Singapore: HackOMania, Mar 7
— 🇺🇸 San Francisco: Women+ in Open Source, Mar 9
— 🇯🇵 Tokyo, Mar 9
— 🇧🇷 São Paulo, Mar 10
— 🇺🇸 San Francisco, Mar 11
— 🇺🇸 Pittsburgh: Apache Iceberg Meetup, Mar 12
— 🇺🇸 New York, Mar 19
— 🇳🇱 Amsterdam: Launch & Learn, Mar 31
— pg_clickhouse — the fastest analytics for Postgres.
— AI-powered migrations from Postgres to ClickHouse
— How Cloudflare uses ClickHouse at quadrillion-row scale
— Lovable loves ClickHouse (why?)
— Langfuse ended up loving it too
— How Buildkite transformed test analytics with ClickHouse
— Is it over for metrics?
— Monitoring Temporal Cloud with ClickStack
— BigQuery connector for ClickPipes
— Wix built AI-driven incident response on ClickHouse
— How ClickHouse observes one of the largest Kafka deployments on earth