With our latest release, among other goodies, we enable interactive analysis in our data clean rooms to accelerate collaborations without compromising security.
We continuously hear from you that tying a dataset to a specific computation has tremendous benefits for security, privacy, and control, but limits the experience for analysts who are used to working with open data and interactive workflows. So we thought: what if we could offer similar interactivity while keeping the data owner in control? Enter data clean room computation requests. Our users can now add new computations to a data clean room (DCR) even after it has been published, but only if the data owner approves them after auditing the code and the results. There is no longer any need to create a new DCR just to add an extra computation, and at the same time we keep our guarantee that data owners stay in full control at all times.
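Conceptually, the approval gate works like this minimal sketch. All names here are hypothetical, for illustration only, and are not the Decentriq API:

```python
from dataclasses import dataclass, field


@dataclass
class ComputationRequest:
    """A computation submitted after DCR publication (illustrative only)."""
    sql: str
    approved_by: set = field(default_factory=set)


class CleanRoom:
    def __init__(self, data_owners):
        self.data_owners = set(data_owners)

    def approve(self, owner, request):
        # Only registered data owners can approve a request,
        # after auditing the code and results.
        if owner in self.data_owners:
            request.approved_by.add(owner)

    def can_run(self, request):
        # A request runs only once every data owner has approved it.
        return request.approved_by == self.data_owners


dcr = CleanRoom({"owner_a", "owner_b"})
req = ComputationRequest("SELECT COUNT(*) FROM joined_table")
dcr.approve("owner_a", req)
assert not dcr.can_run(req)  # still waiting on owner_b
dcr.approve("owner_b", req)
assert dcr.can_run(req)      # all owners approved; computation may run
```

The key design point is that the gate sits with the data owners: an analyst can propose a computation at any time, but nothing executes until every owner has signed off.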
Moving the privacy filter to the query level
You can now select different privacy settings for each individual query, allowing more granular control that ranges from the highest level of aggregation down to individual-level output. Previously, this was only possible at the data clean room level.
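The idea can be sketched as a room-wide default aggregation threshold that individual queries may override. The names and threshold semantics below are assumptions for illustration, not the product's actual settings:

```python
# Hypothetical DCR-level default: groups smaller than this are filtered out.
DCR_DEFAULT_MIN_GROUP_SIZE = 10


def apply_privacy_filter(result_rows, min_group_size=None):
    """Keep only result rows whose group count meets the threshold.

    Each row is a (group_key, count) pair. Passing min_group_size=1
    would allow individual-level output; higher values force aggregation.
    """
    threshold = (DCR_DEFAULT_MIN_GROUP_SIZE
                 if min_group_size is None else min_group_size)
    return [row for row in result_rows if row[1] >= threshold]


rows = [("segment_a", 25), ("segment_b", 3)]
apply_privacy_filter(rows)                    # DCR default: drops segment_b
apply_privacy_filter(rows, min_group_size=1)  # per-query override: keeps both
```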
Join and synthesize
With our previous release’s Synthetic Data Generator, you can create data that looks like the original and has similar statistical properties, but contains no sensitive information. While this alone is a very powerful tool, our users wanted to do more. So with this release, we are enabling you to create synthetic data not only from datasets but also from SQL results: you can now join and analyse the original sensitive data of multiple data owners inside our data clean rooms and then create a synthetic copy of that joined data.
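To make the "join, then synthesize" idea concrete, here is a toy sketch: two owners' tables are joined with SQL, and a synthetic copy is built from the join result. The table names and the naive per-column resampler are illustrative assumptions only; a real synthetic data generator preserves joint statistical structure, whereas this toy version only preserves per-column marginals:

```python
import random
import sqlite3

# Two owners' tables, joined "inside the clean room" (toy stand-in).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE owner_a (id INTEGER, age INTEGER);
    CREATE TABLE owner_b (id INTEGER, spend REAL);
    INSERT INTO owner_a VALUES (1, 34), (2, 51), (3, 29);
    INSERT INTO owner_b VALUES (1, 120.0), (2, 80.5), (3, 200.0);
""")
joined = conn.execute(
    "SELECT a.age, b.spend FROM owner_a a JOIN owner_b b ON a.id = b.id"
).fetchall()


def synthesize(rows, n_rows):
    # Resample each column independently: the synthetic rows draw from the
    # same per-column value distributions but correspond to no real record.
    columns = list(zip(*rows))
    return [tuple(random.choice(col) for col in columns)
            for _ in range(n_rows)]


synthetic = synthesize(joined, 5)
```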
Those of you who have used the previous version - or have seen our demos - know that the output of the synthetic data node can be downloaded and used in your own environment. But what if you want to quickly test a computation, or check whether your local model will work in the Decentriq data clean room with the synthetic data? That is what the development environment is for. It’s a data scratch space where you can run all your computations on the synthetic data, or on any data you own. And when you are done, you can even turn a computation into a “Computation request” with a single button.
You can now set which datasets (if any) are compulsory in a data clean room. Queries cannot be run until all compulsory datasets have been provisioned, so analysts cannot infer information you don’t want them to by running queries earlier than they should.
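The mechanism can be sketched as a simple gate: queries stay locked until every dataset marked as required has been provisioned. The class and dataset names below are hypothetical, for illustration only:

```python
class DatasetGate:
    """Toy model of compulsory-dataset gating (not the product API)."""

    def __init__(self, required_datasets):
        self.required = set(required_datasets)
        self.provisioned = set()

    def provision(self, dataset_name):
        self.provisioned.add(dataset_name)

    def queries_enabled(self):
        # Analysts may run queries only once all required data is present,
        # preventing inference from partially provisioned data.
        return self.required <= self.provisioned


gate = DatasetGate({"crm_data", "transactions"})
gate.provision("crm_data")
assert not gate.queries_enabled()  # "transactions" still missing
gate.provision("transactions")
assert gate.queries_enabled()
```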
We didn’t only add features in this version; we also did our spring cleaning and removed several borders and other unnecessary elements, which together improve the usability of our product.
If you would like to learn more about these updates, let us know and we’ll be more than happy to walk you through them.
We’re always looking for ways to improve our data clean rooms to make it even easier for you to unlock new value from sensitive data assets.