From d10b7fe75d9c8ec7a3211b9474ae9d91f6f2f13e Mon Sep 17 00:00:00 2001 From: Euan Date: Tue, 28 Apr 2026 09:53:08 +0200 Subject: [PATCH 1/2] (docs): removing shell indicators --- docs/cloud/features/scheduler/airflow.md | 2 +- docs/cloud/features/scheduler/dagster.md | 2 +- docs/cloud/features/security/single_sign_on.md | 12 ++++++------ docs/cloud/features/xdb_diffing.md | 2 +- docs/concepts/state.md | 11 ++++++----- docs/guides/configuration.md | 4 ++-- docs/guides/migrations.md | 2 +- docs/integrations/dbt.md | 12 ++++++------ docs/integrations/dlt.md | 14 +++++++------- docs/integrations/engines/bigquery.md | 10 +++++----- docs/reference/configuration.md | 16 ++++++++-------- 11 files changed, 44 insertions(+), 43 deletions(-) diff --git a/docs/cloud/features/scheduler/airflow.md b/docs/cloud/features/scheduler/airflow.md index 11e82769dd..653d3ca474 100644 --- a/docs/cloud/features/scheduler/airflow.md +++ b/docs/cloud/features/scheduler/airflow.md @@ -35,7 +35,7 @@ Start by installing the `tobiko-cloud-scheduler-facade` library in your Airflow Make sure to include the `[airflow]` extra in the installation command: ``` bash -$ pip install tobiko-cloud-scheduler-facade[airflow] +pip install tobiko-cloud-scheduler-facade[airflow] ``` !!! info "Mac Users" diff --git a/docs/cloud/features/scheduler/dagster.md b/docs/cloud/features/scheduler/dagster.md index 338bed2572..054cc465c0 100644 --- a/docs/cloud/features/scheduler/dagster.md +++ b/docs/cloud/features/scheduler/dagster.md @@ -48,7 +48,7 @@ dependencies = [ And then install it into the Python environment used by your Dagster project: ```sh -$ pip install -e '.[dev]' +pip install -e '.[dev]' ``` ### Connect Dagster to Tobiko Cloud diff --git a/docs/cloud/features/security/single_sign_on.md b/docs/cloud/features/security/single_sign_on.md index 716fb10589..df2de91735 100644 --- a/docs/cloud/features/security/single_sign_on.md +++ b/docs/cloud/features/security/single_sign_on.md @@ -145,7 +145,7 @@ Here is what you will see if you are accessing Tobiko Cloud via Okta. Click on t You can see what the status of your session is with the `status` command: ``` bash -$ tcloud auth status +tcloud auth status ``` @@ -156,7 +156,7 @@ $ tcloud auth status Run the `login` command to begin the login process: ``` bash -$ tcloud auth login +tcloud auth login ``` ![tcloud_login](./single_sign_on/tcloud_login.png) @@ -183,11 +183,11 @@ Current Tobiko Cloud SSO session expires in 1439 minutes In order to delete your session information you can use the log out command: ``` bash -> tcloud auth logout -Logged out of Tobiko Cloud +tcloud auth logout +# Logged out of Tobiko Cloud -> tcloud auth status -Not currently authenticated +tcloud auth status +# Not currently authenticated ``` ![tcloud_logout](./single_sign_on/tcloud_logout.png) diff --git a/docs/cloud/features/xdb_diffing.md b/docs/cloud/features/xdb_diffing.md index fbdeb52ca5..154589213d 100644 --- a/docs/cloud/features/xdb_diffing.md +++ b/docs/cloud/features/xdb_diffing.md @@ -47,7 +47,7 @@ Then, specify each table's gateway in the `table_diff` command with this syntax: For example, we could diff the `landing.table` table across `bigquery` and `snowflake` gateways like this: ```sh -$ tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|landing.table' +tcloud sqlmesh table_diff 'bigquery|landing.table:snowflake|landing.table' ``` This syntax tells SQLMesh to use the cross-database diffing algorithm instead of the normal within-database diffing algorithm. 
diff --git a/docs/concepts/state.md b/docs/concepts/state.md index ea5391ec20..236d2399e7 100644 --- a/docs/concepts/state.md +++ b/docs/concepts/state.md @@ -92,7 +92,7 @@ The state file is a simple `json` file that looks like: You can export a specific environment like so: ```sh -$ sqlmesh state export --environment my_dev -o my_dev_state.json +sqlmesh state export --environment my_dev -o my_dev_state.json ``` Note that every snapshot that is part of the environment will be exported, not just the differences from `prod`. The reason for this is so that the environment can be fully imported elsewhere without any assumptions about which snapshots are already present in state. @@ -102,7 +102,7 @@ Note that every snapshot that is part of the environment will be exported, not j You can export local state like so: ```bash -$ sqlmesh state export --local -o local_state.json +sqlmesh state export --local -o local_state.json ``` This essentially just exports the state of the local context which includes local changes that have not been applied to any virtual data environments. @@ -174,10 +174,11 @@ If your project has [multiple gateways](../guides/configuration.md#gateways) wit ```bash # state export -$ sqlmesh --gateway state export -o state.json - +sqlmesh --gateway state export -o state.json +``` +```bash # state import -$ sqlmesh --gateway state import -i state.json +sqlmesh --gateway state import -i state.json ``` ## Version Compatibility diff --git a/docs/guides/configuration.md b/docs/guides/configuration.md index d6d4f20c11..2f5f1f6e53 100644 --- a/docs/guides/configuration.md +++ b/docs/guides/configuration.md @@ -269,7 +269,7 @@ gateways: We can override the `dummy_pw` value with the true password `real_pw` by creating the environment variable. This example demonstrates creating the variable with the bash `export` function: ```bash -$ export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__PASSWORD="real_pw" +export SQLMESH__GATEWAYS__MY_GATEWAY__CONNECTION__PASSWORD="real_pw" ``` After the initial string `SQLMESH__`, the environment variable name components move down the key hierarchy in the YAML specification: `GATEWAYS` --> `MY_GATEWAY` --> `CONNECTION` --> `PASSWORD`. @@ -1492,7 +1492,7 @@ Example enabling debug mode for the CLI command `sqlmesh plan`: === "Bash" ```bash - $ SQLMESH_DEBUG=1 sqlmesh plan + SQLMESH_DEBUG=1 sqlmesh plan ``` === "MS Powershell" diff --git a/docs/guides/migrations.md b/docs/guides/migrations.md index f65a34460a..222bc4cdb8 100644 --- a/docs/guides/migrations.md +++ b/docs/guides/migrations.md @@ -28,7 +28,7 @@ SQLMeshError: SQLMesh (local) is using version '1' which is behind '2' (remote). The project metadata can be migrated to the latest metadata format using SQLMesh's migrate command. ```bash -> sqlmesh migrate +sqlmesh migrate ``` Migration should be issued manually by a single user and the migration will affect all users of the project. diff --git a/docs/integrations/dbt.md b/docs/integrations/dbt.md index 5854236aa2..3a5b4f383f 100644 --- a/docs/integrations/dbt.md +++ b/docs/integrations/dbt.md @@ -19,19 +19,19 @@ Therefore, SQLMesh is packaged with multiple "extras," which you may optionally At minimum, using the SQLMesh dbt adapter requires installing the dbt extra: ```bash -> pip install "sqlmesh[dbt]" +pip install "sqlmesh[dbt]" ``` If your project uses any SQL execution engine other than DuckDB, you must install the extra for that engine. 
For example, if your project runs on the Postgres SQL engine: ```bash -> pip install "sqlmesh[dbt,postgres]" +pip install "sqlmesh[dbt,postgres]" ``` If you would like to use the [SQLMesh Browser UI](../guides/ui.md) to view column-level lineage, include the `web` extra: ```bash -> pip install "sqlmesh[dbt,web]" +pip install "sqlmesh[dbt,web]" ``` Learn more about [SQLMesh installation and extras here](../installation.md#install-extras). @@ -41,7 +41,7 @@ Learn more about [SQLMesh installation and extras here](../installation.md#insta Prepare an existing dbt project to be run by SQLMesh by executing the `sqlmesh init` command *within the dbt project root directory* and with the `dbt` template option: ```bash -$ sqlmesh init -t dbt +sqlmesh init -t dbt ``` This will create a file called `sqlmesh.yaml` containing the [default model start date](../reference/model_configuration.md#model-defaults). This configuration file is a minimum starting point for enabling SQLMesh to work with your DBT project. @@ -247,8 +247,8 @@ Instead, SQLMesh provides predefined time macro variables that can be used in th For example, the SQL `WHERE` clause with the "ds" column goes in a new jinja block gated by `{% if sqlmesh_incremental is defined %}` as follows: ```bash -> WHERE -> ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}' + WHERE + ds BETWEEN '{{ start_ds }}' AND '{{ end_ds }}' ``` `{{ start_ds }}` and `{{ end_ds }}` are the jinja equivalents of SQLMesh's `@start_ds` and `@end_ds` predefined time macro variables. See all [predefined time variables](../concepts/macros/macro_variables.md) available in jinja. diff --git a/docs/integrations/dlt.md b/docs/integrations/dlt.md index 7125510de9..d8d38cb864 100644 --- a/docs/integrations/dlt.md +++ b/docs/integrations/dlt.md @@ -8,7 +8,7 @@ SQLMesh enables efforless project generation using data ingested through [dlt](h To load data from a dlt pipeline into SQLMesh, ensure the dlt pipeline has been run or restored locally. Then simply execute the sqlmesh `init` command *within the dlt project root directory* using the `dlt` template option and specifying the pipeline's name with the `dlt-pipeline` option: ```bash -$ sqlmesh init -t dlt --dlt-pipeline dialect +sqlmesh init -t dlt --dlt-pipeline dialect ``` This will create the configuration file and directories, which are found in all SQLMesh projects: @@ -33,7 +33,7 @@ SQLMesh will also automatically generate models to ingest data from the pipeline The default location for dlt pipelines is `~/.dlt/pipelines/`. 
If your pipelines are in a [different directory](https://dlthub.com/docs/general-usage/pipeline#separate-working-environments-with-pipelines_dir), use the `--dlt-path` argument to specify the path explicitly: ```bash -$ sqlmesh init -t dlt --dlt-pipeline --dlt-path dialect +sqlmesh init -t dlt --dlt-pipeline --dlt-path dialect ``` ### Generating models on demand @@ -43,25 +43,25 @@ To update the models in your SQLMesh project on demand, use the `dlt_refresh` co - **Generate all missing tables**: ```bash -$ sqlmesh dlt_refresh +sqlmesh dlt_refresh ``` - **Generate all missing tables and overwrite existing ones** (use with `--force` or `-f`): ```bash -$ sqlmesh dlt_refresh --force +sqlmesh dlt_refresh --force ``` - **Generate specific dlt tables** (using `--table` or `-t`): ```bash -$ sqlmesh dlt_refresh --table +sqlmesh dlt_refresh --table ``` - **Provide the explicit path to the pipelines directory** (using `--dlt-path`): ```bash -$ sqlmesh dlt_refresh --dlt-path +sqlmesh dlt_refresh --dlt-path ``` #### Configuration @@ -83,7 +83,7 @@ Load package 1728074157.660565 is LOADED and contains no failed jobs After the pipeline has run, generate a SQLMesh project by executing: ```bash -$ sqlmesh init -t dlt --dlt-pipeline sushi duckdb +sqlmesh init -t dlt --dlt-pipeline sushi duckdb ``` Then the SQLMesh project is all set up. You can then proceed to run the SQLMesh `plan` command to ingest the dlt pipeline data and populate the SQLMesh tables: diff --git a/docs/integrations/engines/bigquery.md b/docs/integrations/engines/bigquery.md index b93d6837ed..4ea4d1d222 100644 --- a/docs/integrations/engines/bigquery.md +++ b/docs/integrations/engines/bigquery.md @@ -22,7 +22,7 @@ Follow the [quickstart installation guide](../../installation.md) up to the step Instead of installing just SQLMesh core, we will also include the BigQuery engine libraries: ```bash -> pip install "sqlmesh[bigquery]" +pip install "sqlmesh[bigquery]" ``` ### Install Google Cloud SDK @@ -35,19 +35,19 @@ Follow these steps to install and configure the Google Cloud SDK on your compute - Unpack the downloaded file with the `tar` command: ```bash - > tar -xzvf google-cloud-cli-{SYSTEM_SPECIFIC_INFO}.tar.gz + tar -xzvf google-cloud-cli-{SYSTEM_SPECIFIC_INFO}.tar.gz ``` - Run the installation script: ```bash - > ./google-cloud-sdk/install.sh + ./google-cloud-sdk/install.sh ``` - Reload your shell profile (e.g., for zsh): ```bash - > source $HOME/.zshrc + source $HOME/.zshrc ``` - Run [`gcloud init` to setup authentication](https://cloud.google.com/sdk/gcloud/reference/init) @@ -114,7 +114,7 @@ The output will look something like this: We've verified our connection, so we're ready to create and execute a plan in BigQuery: ```bash -> sqlmesh plan +sqlmesh plan ``` ### View results in BigQuery Console diff --git a/docs/reference/configuration.md b/docs/reference/configuration.md index b13438ee2d..625bf62150 100644 --- a/docs/reference/configuration.md +++ b/docs/reference/configuration.md @@ -277,33 +277,33 @@ Example enabling debug mode for the CLI command `sqlmesh plan`: === "Bash" ```bash - $ sqlmesh --debug plan + sqlmesh --debug plan ``` ```bash - $ SQLMESH_DEBUG=1 sqlmesh plan + SQLMESH_DEBUG=1 sqlmesh plan ``` === "MS Powershell" ```powershell - PS> sqlmesh --debug plan + sqlmesh --debug plan ``` ```powershell - PS> $env:SQLMESH_DEBUG=1 - PS> sqlmesh plan + $env:SQLMESH_DEBUG=1 + sqlmesh plan ``` === "MS CMD" ```cmd - C:\> sqlmesh --debug plan + sqlmesh --debug plan ``` ```cmd - C:\> set SQLMESH_DEBUG=1 - C:\> sqlmesh plan + set 
SQLMESH_DEBUG=1
+    sqlmesh plan
    ```

## Runtime Environment