Amazon Redshift Data¶
Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers.
Prerequisite Tasks¶
To use these operators, you must do a few things:
Create necessary resources using AWS Console or AWS CLI.
Install API libraries via pip.
pip install 'apache-airflow[amazon]'

Detailed information is available in Installation of Airflow™.
Operators¶
Execute a statement on an Amazon Redshift cluster¶
Use the RedshiftDataOperator to execute statements against an Amazon Redshift cluster.
This differs from RedshiftSQLOperator in that it allows users to query and retrieve data via the AWS API, avoiding the need for a Postgres connection.
create_table_redshift_data = RedshiftDataOperator(
    task_id="create_table_redshift_data",
    cluster_identifier=redshift_cluster_identifier,
    database=DB_NAME,
    db_user=DB_LOGIN,
    sql="""
        CREATE TABLE IF NOT EXISTS fruit (
            fruit_id INTEGER,
            name VARCHAR NOT NULL,
            color VARCHAR NOT NULL
        );
    """,
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
)
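Because the operator goes through the Redshift Data API, it can also run DML and pull query results back without a Postgres connection. The following is a minimal sketch, not taken from the example DAG above: it reuses the same cluster and connection variables, assumes the table created earlier, and assumes the operator's return_sql_result flag (available in recent Amazon provider versions) to push the query result to XCom.

insert_fruit_redshift_data = RedshiftDataOperator(
    task_id="insert_fruit_redshift_data",
    cluster_identifier=redshift_cluster_identifier,
    database=DB_NAME,
    db_user=DB_LOGIN,
    sql="""
        INSERT INTO fruit (fruit_id, name, color)
        VALUES (1, 'Banana', 'Yellow'), (2, 'Apple', 'Red');
    """,
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
)

select_fruit_redshift_data = RedshiftDataOperator(
    task_id="select_fruit_redshift_data",
    cluster_identifier=redshift_cluster_identifier,
    database=DB_NAME,
    db_user=DB_LOGIN,
    sql="SELECT fruit_id, name, color FROM fruit;",
    # Assumption: return_sql_result=True makes the operator return the Data API
    # statement result (rows) instead of the statement ID.
    return_sql_result=True,
    poll_interval=POLL_INTERVAL,
    wait_for_completion=True,
)

create_table_redshift_data >> insert_fruit_redshift_data >> select_fruit_redshift_data

With wait_for_completion=True, each task polls the Data API every POLL_INTERVAL seconds until the statement finishes before the next task starts.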