Warehouse connections are configured through the `~/.whale/config/connections.yaml` file. The accepted key/value pairs are warehouse-specific and, as such, are most easily added through the `wh init` workflow. If you need to edit this file manually, refer to the warehouse-specific documentation below.
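Each connection is its own YAML document within this file, separated by `---` markers, so several warehouses can be registered side by side. A minimal sketch (all names and hosts below are hypothetical):

```yaml
---
name: redshift-warehouse     # hypothetical name
metadata_source: Redshift
uri: redshift.example.com    # hypothetical host
port: 5439
---
name: presto-cluster         # hypothetical name
metadata_source: Presto
uri: presto.example.com      # hypothetical host
port: 8080
```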
```yaml
---
name: ~
metadata_source: ~
database: ~  # For all but bigquery
```
`name`
Unique warehouse name. This will be used to name the subdirectory within `~/.whale/metadata` that stores metadata and UGC for each table.

`metadata_source`
The type of connection that this yaml section describes. These are case sensitive and can be one of the following: `Bigquery`, `Glue`, `HiveMetastore`, `Neo4j`, `Postgres`, `Presto`, `Redshift`, `Snowflake`, `splicemachine`.

`database`
Specify a string here to restrict scraping to a particular database under your connection. Specifying this modifies the SQLAlchemy conn string used for the connection, using this string as the "database" field (in ANSI SQL, this is known as the "catalog"). See the SQLAlchemy docs for more details.
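For instance, to restrict scraping on a Postgres connection to a single database, the section might look like this (host and names are hypothetical):

```yaml
---
name: analytics-pg           # hypothetical warehouse name
metadata_source: Postgres
uri: postgres.example.com    # hypothetical host
port: 5432
database: analytics          # only the "analytics" database/catalog is scraped
```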
```yaml
---
name:
metadata_source: Bigquery
key_path: /Users/robert/gcp-credentials.json
project_credentials:  # Only one of key_path or project_credentials needed
project_id:
```
Only one of `key_path` or `project_credentials` is required.
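For example, a key-file-based configuration might look like the following (path and project id are hypothetical):

```yaml
---
name: bigquery-warehouse                   # hypothetical name
metadata_source: Bigquery
key_path: /path/to/service-account.json    # hypothetical service account key file
project_id: my-gcp-project                 # hypothetical GCP project
```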
To do: Unlike Bigquery, we currently don't allow you to specify credentials explicitly for Glue.
```yaml
---
name: whatever-you-want  # Optional
metadata_source: Glue
```
Supplying the `name` parameter will place all of your Glue documentation within a separate folder, as is done with the other extractors. Because Glue is already a metadata aggregator, however, this may not be optimal, particularly if you connect to other warehouses with whale directly. In that case, the `name` parameter can be omitted, and the table stubs will instead reside within subdirectories named after the underlying warehouse/instance.
For example, with `name` set, your files will all be organized within a single `~/.whale/metadata/<name>/` folder; without `name`, your files will be stored in one subdirectory per underlying warehouse/instance, as sketched below.
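A rough illustration, assuming `name: glue` and two underlying instances (instance and table names are hypothetical, and the exact stub naming may differ; the point is the folder structure):

```
~/.whale/metadata/
└── glue/
    ├── instance1.schema.table1.md
    └── instance2.schema.table2.md
```

Without `name`, each instance instead gets its own subdirectory:

```
~/.whale/metadata/
├── instance1/
│   └── schema.table1.md
└── instance2/
    └── schema.table2.md
```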
```yaml
---
name:
metadata_source: HiveMetastore
uri:
port:
username:  # Optional
password:  # Optional
dialect: postgres  # postgres/mysql/etc. This is the dialect used in the SQLAlchemy conn string.
database: hive  # The database within this connection where the metastore lives. This is usually "hive".
```
For more information on the `dialect` field, see the SQLAlchemy documentation.
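Under the hood, these fields are assembled into a SQLAlchemy connection string for the metastore database, roughly of the standard SQLAlchemy URL form (a sketch, using a hypothetical host and credentials):

```
<dialect>://<username>:<password>@<uri>:<port>/<database>
# e.g. postgres://hive:secret@metastore.example.com:5432/hive
```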
We provide support for scraping metadata from Amundsen's neo4j backend. By default, however, we do not install the neo4j drivers within our installation virtual environment. To use this metadata source, install whale with `make && make install`, then run `pip install neo4j-driver` inside whale's installation virtual environment.
```yaml
---
name:
metadata_source: Neo4j
uri:
port:
username:  # Optional
password:  # Optional
```
```yaml
---
name:
metadata_source: Postgres
uri:
port:
username:  # Optional
password:  # Optional
```
```yaml
---
name:
metadata_source: Presto
uri:
port:
username:  # Optional
password:  # Optional
```
```yaml
---
name:
metadata_source: Redshift
uri:
port:
username:  # Optional
password:  # Optional
```
```yaml
---
name:
metadata_source: Snowflake
uri:
port:
username:  # Optional
password:  # Optional
role:      # Optional
```
```yaml
---
name:
metadata_source: splicemachine
uri: jdbc-cluster114-splice-prod.splicemachine.io  # an example
username:
password:
```
We also support the use of custom scripts that handle metadata scraping and dump this data into local files (in the `metadata` subdirectory) and manifests (in the `manifests` subdirectory). For more information, see Custom extraction.
```yaml
---
build_script_path: /path/to/build_script.py
venv_path: /path/to/venv
python_binary_path: /path/to/binary  # Optional
```