# Infinite precision

Big decimal support.

Infinite precision refers to Space and Time's unique support for the big decimal data type.

Legacy databases typically support numeric data types with only 18–38 digits, but many numbers in Web3 run to 60+ digits. Wallet balances, gas fees, and token amounts all involve numbers of this size, which makes blockchain data difficult to work with in a traditional database. To run computations against these big numbers, you have to round them down to the number of digits the database supports. In other words, you lose precision.

## Space and Time big decimal support

Space and Time has extended the decimal data type to support numbers far beyond the digit limit in other databases. With Space and Time, you can define big decimals with a simple `CREATE` statement. All of the complexity is abstracted away.

All you have to do is create a table and define a column with your chosen precision. We handle the rest. Here's how the column definition looks, where the first argument is the total number of digits (precision) and the second is the number of digits after the decimal point (scale):

```
decimal(<digits>, <digits after decimal>)
```

### Examples

- Define a column as `decimal(78, 0)`, and you get an integer with 78-digit precision (0 digits after the decimal point)
- Define a column as `decimal(300, 250)`, and you get a decimal with 300-digit precision (250 digits after the decimal point)
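
For instance, both definitions above could appear in a `CREATE TABLE` statement like the following. This is a minimal sketch; the table and column names are illustrative, not from the docs:

```sql
-- Hypothetical table; names are illustrative
CREATE TABLE token_balances (
  wallet_address VARCHAR,
  balance        DECIMAL(78, 0),    -- 78-digit integer, large enough for uint256 values
  exchange_rate  DECIMAL(300, 250)  -- 300 total digits, 250 after the decimal point
);
```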

## Infinite precision

This allows for, essentially, infinite precision for big number computations. Unlike other databases, Space and Time lets you perform operations on big numbers/decimals without losing precision, enabling more accurate analytics against blockchain data.

## Supported Operations

- Binary arithmetic: `+`, `-`, `*`, `/`, `%`
- Comparison operators: `>`, `<`, `<=`, `>=`, `<=>`
- Aggregate functions: `AVG`, `MIN`, `MAX`, `COUNT`, `SUM`
- Null checks
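
As a sketch, these operations can be combined in an ordinary query against a big decimal column. The `token_balances` table and its columns here are hypothetical:

```sql
-- Hypothetical table with a DECIMAL(78, 0) balance column
SELECT
  wallet_address,
  SUM(balance)           AS total_balance,  -- aggregate over big decimals
  MAX(balance) % 1000000 AS remainder       -- binary arithmetic
FROM token_balances
WHERE balance > 1000000000000000000         -- comparison against a large literal
  AND balance IS NOT NULL                   -- null check
GROUP BY wallet_address;
```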

Note: For binary operations, if the leading operand's data type is not big decimal, the query engine will truncate and round the big decimal operand to match, resulting in a loss of precision.

For example, the following expression causes precision loss because `1` is the leading operand and is not a big decimal:

```
1 - 3000000089999919191919190000000000000000.2000000
```

To avoid losing precision, cast the leading operand to a big decimal:

```
cast(1 as decimal(48, 7)) - 3000000089999919191919190000000000000000.2000000
```

## Limitations

**Subqueries:** There is currently a limitation with the big decimal data type in subqueries. Work is underway to improve this; for now, use the workaround below.

In the following query, if `l_quantity` is a big decimal type, query analysis fails:

```
select sum(l_quantity) from lineitem where l_orderkey in (
  select l_orderkey
  from lineitem
  group by l_orderkey
  having sum(l_quantity) > 300
)
```

To avoid the problem, you can explicitly cast the column in the subquery:

```
select sum(l_quantity) from lineitem where l_orderkey in (
  select l_orderkey
  from lineitem
  group by l_orderkey
  having sum(cast(l_quantity as decimal(78, 20))) > 300
)
```
