I wrote the playground and I should link to your website in my docs. This is neat.
Remember the famous https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule?
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
CEL is a well-specified, reasonably fast "embeddable" language with familiar syntax. I'm sure there are other languages that fit the description though.
It also allows using CEL in ValidatingAdmissionPolicies: - https://kubernetes.io/docs/reference/access-authn-authz/vali...
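For illustration, a minimal ValidatingAdmissionPolicy might look like this (the policy name and the replica limit are made up; the field layout follows the `admissionregistration.k8s.io/v1` API):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: replica-limit.example.com   # hypothetical name
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups:   ["apps"]
        apiVersions: ["v1"]
        operations:  ["CREATE", "UPDATE"]
        resources:   ["deployments"]
  validations:
    # The CEL expression runs against the incoming object; no webhook needed.
    - expression: "object.spec.replicas <= 5"
      message: "replica count must not exceed 5"
```

The CEL expression is evaluated in-process by the API server, which is exactly the embeddability the parent comment is pointing at.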
With OPA you can easily create policies that take tens, hundreds, or even thousands of milliseconds.
That comes at the expense of a lot of power though, so much of the complex logic that you can write in OPA simply isn't achievable in CEL.
But most use cases treat CEL as user-provided config, which requires runtime parsing and execution.
The property you really want is "can be cancelled after a certain amount of compute time - ideally a deterministic amount", and you can obviously do that with Turing complete languages.
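That cancellation property is easy to sketch even for an interpreter of user-supplied code. Here is a toy Python evaluator (not CEL; the operator whitelist and the budget value are made up) that checks a wall-clock deadline at every node:

```python
import ast
import time

class Deadline(Exception):
    """Raised when evaluation exceeds its time budget."""

def eval_with_deadline(expr, variables, budget_s=0.050):
    """Evaluate a tiny arithmetic/boolean expression subset, aborting
    once the wall-clock budget is spent. Illustrative sketch only."""
    deadline = time.monotonic() + budget_s

    def ev(node):
        if time.monotonic() > deadline:  # cancellation point on every node
            raise Deadline("evaluation cancelled")
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
            return ev(node.left) + ev(node.right)
        if isinstance(node, ast.Compare) and isinstance(node.ops[0], ast.Lt):
            return ev(node.left) < ev(node.comparators[0])
        raise ValueError("operator not allowed")

    return ev(ast.parse(expr, mode="eval"))

print(eval_with_deadline("a + b < 10", {"a": 3, "b": 4}))  # True
```

The deadline bounds wall-clock time but is not deterministic across runs; a deterministic bound needs cost accounting like the metering discussed further down the thread.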
I have a business analytics friend who knows SQL because it's part of his workflows.
But Excel, Notion, Power BI, and other low/no-code tools all have their own data filtering and transformation languages (or dialects). He'd rather spend his time learning more about his line of business than an aspect of yet another cloud tool that gets forced on him.
No Doom running on CEL.
I recently wanted to expose some basic user auto-tagging/labeling based on JSON data.
I chose CEL over Python and SQL because I could just import the runtime in C++, or any language that implements it (Python, JS, etc.).
Safely running a sandboxed python execution engine is significantly more effort and lower performance.
At this, CEL excels.
Where it fell short was user familiarity, and cases where the JSON data itself was complex.
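To make the auto-tagging use case concrete, here is a toy sketch. In the real setup each predicate would be a CEL expression string compiled by the CEL runtime; plain Python lambdas stand in for them here, and the rule names and JSON fields are invented:

```python
import json

# Hypothetical tagging rules: tag name -> predicate over the decoded JSON.
# With CEL these would be expression strings like
# "logins_last_30d > 20", compiled once and evaluated per record.
RULES = {
    "power_user": lambda u: u["logins_last_30d"] > 20,
    "trial":      lambda u: u["plan"] == "free" and u["age_days"] < 14,
}

def tags_for(raw_json):
    """Return the sorted list of tags whose predicate matches the record."""
    user = json.loads(raw_json)
    return sorted(tag for tag, pred in RULES.items() if pred(user))

print(tags_for('{"logins_last_30d": 25, "plan": "free", "age_days": 7}'))
# ['power_user', 'trial']
```

The appeal of CEL here is that the `RULES` table can be user-supplied configuration rather than code baked into the service.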
"Guaranteed to terminate" actually means "guaranteed to terminate in finite but possibly arbitrarily large time" which is really not a useful property.
There's no practical difference between a filter that might take 1 billion years to run and one that might take more than a billion years.
https://github.com/google/cel-spec/blob/master/doc/langdef.m...
If your service puts an upper bound on input size and CEL expression size (true for all practical applications), you can actually get a guarantee that you can't construct a billion-year expression, and even guarantee that all expressions will evaluate in, say, 60 seconds.
Non-Turing-completeness by itself does not guarantee this, but it is a necessary prerequisite for such guarantees.
There is a practical solution to this called "metering", like the gas mechanism in Ethereum's EVM or cost calculation for complex GraphQL queries.
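A minimal sketch of metering, assuming a tiny expression subset and a made-up cost of one "gas" unit per AST node (real schedules like the EVM's price each operation individually):

```python
import ast

class OutOfGas(Exception):
    """Raised when the evaluation budget is exhausted."""

def metered_eval(expr, variables, gas=100):
    """Evaluate a tiny expression subset, charging one unit of gas per
    AST node visited and aborting when the budget runs out."""
    budget = {"gas": gas}

    def charge():
        budget["gas"] -= 1
        if budget["gas"] < 0:
            raise OutOfGas("gas budget exhausted")

    def ev(node):
        charge()  # every node visit costs one unit
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return variables[node.id]
        if isinstance(node, ast.BinOp):
            ops = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
            return ops[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("node type not allowed")

    return ev(ast.parse(expr, mode="eval"))

print(metered_eval("x * 2 + 1", {"x": 20}))  # 41
```

Unlike a wall-clock deadline, a gas budget is deterministic: the same expression on the same input always consumes the same amount, so a rule either always fits the budget or never does.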
In the common use-cases for CEL that I've seen, you don't want to skip evaluation and fail open or closed arbitrarily. That can mean things like "abusive user gets access to data they should not be allowed to access because rule evaluation was skipped".
You may also have tons of rules and be evaluating them very often, so speed is important.