Note: much of this is moved to grafana/dataplane#56
Dataplane adds a data typing layer on top of data frames. By analogy, dataplane types are to data frames as TypeScript is to JavaScript.
Specifically, a property is added to the data frame to indicate its type, which is made up of a kind and a format, for example "Timeseries Wide".
There are two properties attached to each data frame for this: `frame.meta.type` and `frame.meta.typeVersion`.
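For example, in a Go backend plugin using the grafana-plugin-sdk-go `data` package, these properties live on the frame's `Meta` field (a minimal sketch):

```go
package main

import "github.com/grafana/grafana-plugin-sdk-go/data"

func main() {
	// Declare the frame's dataplane type (kind + format, here "Timeseries Wide")
	// and the contract version the frame claims to follow.
	frame := data.NewFrame("response")
	frame.Meta = &data.FrameMeta{
		Type:        data.FrameTypeTimeSeriesWide, // frame.meta.type = "timeseries-wide"
		TypeVersion: data.FrameTypeVersion{0, 1},  // frame.meta.typeVersion = [0, 1]
	}
	_ = frame
}
```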
The contract is a written set of rules for each dataplane data type. It says how frames should be formed by producers of data (data sources, transformations) and what consumers like dashboards, alerting, and apps can expect from that data.
In short, it describes the rules for valid and invalid schemas for each type.
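To illustrate the kind of rules the contract encodes, here is a sketch of a frame following the "Timeseries Wide" rules: one shared time field, followed by numeric value fields whose labels tell the series apart (the label names below are made up; see the contract in grafana/dataplane for the authoritative rules):

```go
package main

import (
	"time"

	"github.com/grafana/grafana-plugin-sdk-go/data"
)

func main() {
	t0 := time.Unix(1700000000, 0)
	// A single shared time field, then one numeric field per series,
	// with labels distinguishing the series.
	frame := data.NewFrame("",
		data.NewField("time", nil, []time.Time{t0, t0.Add(time.Minute)}),
		data.NewField("cpu", data.Labels{"host": "a"}, []float64{0.2, 0.3}),
		data.NewField("cpu", data.Labels{"host": "b"}, []float64{0.4, 0.5}),
	)
	frame.Meta = &data.FrameMeta{
		Type:        data.FrameTypeTimeSeriesWide,
		TypeVersion: data.FrameTypeVersion{0, 1},
	}
	_ = frame
}
```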
The main point is self-interoperability within Grafana: between data sources and features like dashboards and alerting.
For devs: data source authors know what type of frames to output, and authors of features know what to expect as their input. This makes the platform more scalable, and development more efficient and less frustrating, by reducing incompatibilities.
For users: as a side effect, they should find that things are more reliable and work as expected.
So compatibility becomes about supporting data types rather than specific features and data sources. For example, if a data source produces type "A", and alerting and certain visualizations accept type "A", then that source should work with alerting and those visualizations.
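Concretely, a consumer can dispatch on the declared type instead of inspecting field shapes. A hypothetical sketch (`handleFrame` and the accepted types are placeholders, not a real Grafana API):

```go
package consumer

import (
	"fmt"

	"github.com/grafana/grafana-plugin-sdk-go/data"
)

// handleFrame branches on the declared dataplane type rather than
// guessing from the frame's field layout.
func handleFrame(frame *data.Frame) error {
	if frame.Meta == nil || frame.Meta.Type == "" {
		return fmt.Errorf("frame has no declared dataplane type")
	}
	switch frame.Meta.Type {
	case data.FrameTypeTimeSeriesWide, data.FrameTypeTimeSeriesMulti:
		return nil // types this consumer supports natively
	default:
		return fmt.Errorf("unsupported type %q", frame.Meta.Type)
	}
}
```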
Two main reasons: open tent and history.
The open tent nature of our software means data comes in many different formats. To support those more natively, there is a set of formats where generally one of them should be close to what a data source produces, so the data source can relay the information with minimal distortion.
Data frames existed before the dataplane types did, so the dataplane formats were designed to stay close to the conventions that had evolved, while removing inconsistencies and making a decision one way or the other where conventions conflicted.
Consumers of data have to infer the type by looking at the returned data. This has a few problems:
- Users get confused about how to write queries to work with different things
- Error messages can become seemingly unrelated to what users are doing
- Different features guess differently (e.g. alerting vs visualizations), making it hard for users and devs to know what to send
- "Guessing code" (on the consumer side) gets more convoluted over time as more exceptions are added for various data sources.
You can still return data frames and work with consumers to support them. The point of dataplane types is to cover well-established types, or types proposed to be established. You can set `frame.meta.type` to whatever string you want, and the version to any x.x.
So dataplane is designed to allow data types to grow into maturity in a way that does not limit new innovation.
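Concretely, a frame can carry a not-yet-established type; the type name below is invented for illustration:

```go
package main

import "github.com/grafana/grafana-plugin-sdk-go/data"

func main() {
	frame := data.NewFrame("response")
	// FrameType is a string type, so experimental or proposed types can be
	// declared before they are part of the dataplane contract.
	frame.Meta = &data.FrameMeta{
		Type:        data.FrameType("my-org-custom-heatmap"), // hypothetical, not an established type
		TypeVersion: data.FrameTypeVersion{0, 1},
	}
	_ = frame
}
```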
Besides interoperability:
- We can suggest what actions can be taken with the data when the type is reliably known. For example, we can suggest creating alert rules, or certain visualizations in dashboards, that work well with that type.
- We can suggest transformations that get you from the current type to another type that supports additional actions, as sketched below
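A sketch of how such suggestions could be driven off the declared type (the action names are invented for illustration; this is not an actual Grafana API):

```go
package suggest

import "github.com/grafana/grafana-plugin-sdk-go/data"

// suggestionsByType maps a known dataplane type to actions that work
// well with it. Purely illustrative.
var suggestionsByType = map[data.FrameType][]string{
	data.FrameTypeTimeSeriesWide: {"create alert rule", "time series panel"},
	data.FrameTypeTimeSeriesLong: {"table panel", "apply 'prepare time series' transform"},
	data.FrameTypeNumericWide:    {"create alert rule", "stat panel"},
}

// Suggest returns the actions associated with a frame's declared type.
func Suggest(frame *data.Frame) []string {
	if frame.Meta == nil {
		return nil
	}
	return suggestionsByType[frame.Meta.Type]
}
```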
No, and this would be a very helpful evolution.
Right now everything is at runtime, or in code. If we had a schema of query types and the data types they return, we could generate compatibility matrices of data sources and features.
The normal pattern is to have a dropdown in the data source's query UI to assert the query type.
This often appears as "format as" in the UI. The data source can then use this query information to produce a dataplane-compatible type for the response.
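For example, a data source backend might read a `format` property from the query model and declare the type accordingly (a sketch; the query JSON shape and values vary per data source):

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/grafana/grafana-plugin-sdk-go/data"
)

// queryModel mirrors a hypothetical "format as" dropdown in the query editor.
type queryModel struct {
	Format string `json:"format"` // e.g. "time_series" or "table"
}

func main() {
	var qm queryModel
	_ = json.Unmarshal([]byte(`{"format": "time_series"}`), &qm)

	frame := data.NewFrame("response")
	switch qm.Format {
	case "time_series":
		frame.Meta = &data.FrameMeta{Type: data.FrameTypeTimeSeriesWide}
	default:
		// table-like responses can stay untyped or use another kind
	}
	fmt.Println(frame.Meta)
}
```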
Perhaps, but not at the feature/consumer level. The data source knows more about the data that comes from the system behind it, so guessing at the consumer level mostly ends up confusing the user.
Rather, if we want to help the user, auto-determining the type should be done within the data source, in a way that fits the constraints of that data source.
- Many visualizations work natively with the labeled types (`multi` and `wide`). (TODO: more discovery around what works with which visualizations.) The `long` types will naturally work in the table visualization, but presently `long` will not work with the time series visualization until the user adds the `prepare time series` transform (from Grafana Transformations).
- Grafana Managed Alerts and Recording Rules via Server Side Expressions (SSE) support the `multi`, `wide`, and `long` formats for both the `numeric` and `timeseries` kinds. Server side expressions converts the responses to the corresponding kind. `numeric` can be alerted on directly; `timeseries` will need an SSE reduce operation (which creates a numeric result) to be alerted on.
- Prometheus (and Amazon / Azure variants)
- Loki
- Azure Monitor and Azure Data Explorer
- BigQuery
- Clickhouse
- Cloudflare
- Databricks
- New Relic
- Oracle, Postgres, MySQL
- Influx
- Snowflake
- VictoriaMetrics
Yes, see https://github.com/grafana/dataplane/tree/main/examples/data for numeric and time series examples. Each JSON file is an array of frames as they are encoded in JSON.

[ { "schema": { "meta": { "type": "numeric-wide", // SQL expressions need this property set for labeled metric data "typeVersion": [0, 1], // optional for SQL Expressions (so can be default 0.0) // TypeVersion > 0.0 should make other SSE operations more deterministic, // but if not a new DS, safest path is to do that as a separate task. // ... } }, "fields": [ // ... ] } ]