🧱DeltaLake (via Databricks)
How does Rasgo work with Databricks?
Rasgo is a metadata-only product, meaning all of your actual data stays in your data warehouse; Rasgo queries it there by dynamically generating SQL.
Rasgo performs only read-only operations in your Databricks environment:
* Rasgo reads the information schema for tables and columns it has access to
* Rasgo dynamically generates and executes SQL on behalf of the user to analyze data
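The information-schema read in the first step might look like the following query. This is a sketch: Unity Catalog exposes an `information_schema` schema per catalog, and the catalog name `main` here is a placeholder, not something Rasgo prescribes.

```python
# Sketch: the kind of metadata query a tool like Rasgo could issue to
# discover tables and columns. "main" is a placeholder catalog name.
metadata_sql = """
SELECT table_catalog, table_schema, table_name, column_name, data_type
FROM main.information_schema.columns
ORDER BY table_schema, table_name, ordinal_position
""".strip()

print(metadata_sql)
```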
Connecting to Databricks
IP Restrictions
Rasgo will always connect to your account from the IP addresses below. If you have networking restrictions enabled, make sure to add them to your allowlist.
| IP Address |
| --- |
| 54.84.138.60 |
| 54.84.66.109 |
Credentials
Rasgo needs service credentials to authenticate and execute SQL queries in your account.
When connecting Rasgo to your Databricks account, you'll need to provide the following information:
| Field | Description | Example |
| --- | --- | --- |
| Server Hostname | The Databricks compute resource's Server Hostname value. | `abc.123.azuredatabricks.net` |
| HTTP Path | The Databricks compute resource's HTTP Path value. | `/sql/1.0/warehouses/abcde` |
| Access token | A Databricks personal access token. | `aeiou-1` |
Rasgo connects to Databricks using a personal access token by default. If you require a different connection method, please contact Rasgo for advanced connection support.
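A connection built from these three fields can be sketched with the open-source `databricks-sql-connector` package (`pip install databricks-sql-connector`). This is an illustration of the credential fields above, not Rasgo's internal implementation; the hostname, path, and token values are placeholders.

```python
# Sketch: connecting to a Databricks SQL warehouse with the three
# credential fields from the table above. Values are placeholders,
# read from environment variables when available.
import os

SERVER_HOSTNAME = os.getenv("DATABRICKS_SERVER_HOSTNAME", "abc.123.azuredatabricks.net")
HTTP_PATH = os.getenv("DATABRICKS_HTTP_PATH", "/sql/1.0/warehouses/abcde")
ACCESS_TOKEN = os.getenv("DATABRICKS_TOKEN", "")

def run_query(statement: str):
    """Open a connection, execute one SQL statement, and return the rows."""
    from databricks import sql  # third-party package: databricks-sql-connector
    with sql.connect(
        server_hostname=SERVER_HOSTNAME,
        http_path=HTTP_PATH,
        access_token=ACCESS_TOKEN,
    ) as conn:
        with conn.cursor() as cursor:
            cursor.execute(statement)
            return cursor.fetchall()

# Only attempt a real connection when a token is actually configured.
if ACCESS_TOKEN:
    print(run_query("SHOW TABLES"))
```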
Synchronous Execution
Rasgo connects to your Databricks account using a Python connector, which limits the response time of individual queries to 30 seconds. If a query does not return results within 30 seconds, the connection is terminated and Rasgo returns an error message to the end user.
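A client-side cap like this can be sketched with Python's standard library. Here `execute` stands in for any blocking function that runs a query and returns rows; this is a generic pattern, not Rasgo's actual mechanism.

```python
# Sketch: enforcing a fixed time limit on a blocking query call,
# mirroring the 30-second synchronous limit described above.
import concurrent.futures

QUERY_TIMEOUT_SECONDS = 30

def run_with_timeout(execute, *args, timeout=QUERY_TIMEOUT_SECONDS):
    """Run `execute(*args)` in a worker thread; raise TimeoutError on overrun."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(execute, *args)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            raise TimeoutError(f"query exceeded {timeout}s limit")

# Example with a fast stand-in "query":
rows = run_with_timeout(lambda: [("ok",)], timeout=1)  # rows == [("ok",)]
```

If the limit is exceeded, the caller sees a `TimeoutError` rather than waiting indefinitely, which matches the error-message behavior described above.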
Success!
Configuration is complete! You're ready to start using Rasgo.