Google Storage CSV file to Amazon S3 CSV file
Connect Google Storage CSV file and Amazon S3 CSV file in our serverless environment
Use this template to read CSV file entries from a Google Storage bucket and use them to create CSV file entries in an Amazon S3 bucket.
Read CSV file entries from Google Storage bucket
Used integrations: Google Cloud Storage

JavaScript
class GoogleStorageSourceReadRemoteCsv {
  async init() {
    // TODO: Create your google-storage credential
    // More info at https://yepcode.io/docs/integrations/google-storage/#credential-configuration
    const googleStorage = yepcode.integration.googleStorage(
      "your-google-storage-credential-name"
    );
    // TODO: Customize your bucket name
    this.bucket = googleStorage.bucket("your-bucket-name");
  }

  async fetch(publish, done) {
    // TODO: Customize your csv file path to download
    await this.bucket
      .file("one-folder/my-filename-1653415231696.csv")
      .createReadStream()
      .pipe(
        csv.parse({
          delimiter: ",",
          columns: true,
        })
      )
      .on("data", publish)
      .on("end", done);
  }

  async close() {}
}
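In this JavaScript variant, YepCode drives the lifecycle of the class: init() resolves the credential and bucket, fetch(publish, done) streams the remote file through the CSV parser, invoking publish for each parsed row and done once the stream ends, and close() handles any cleanup (none is needed here).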
Python

import csv
import io

class GoogleStorageSourceReadRemoteCsv:
    def setup(self):
        # TODO: Create your Google Storage credential:
        # More info at https://yepcode.io/docs/integrations/google-storage/#credential-configuration
        self.storage_client = yepcode.integration.googleStorage("your-storage-credential-name")
        # TODO: If your csv does not have headers, you can define them here as a list:
        # self.fieldnames = ["column1", "column2", "column3"]
        self.fieldnames = None

    def generator(self):
        # TODO: Customize your bucket name and object key
        bucket = self.storage_client.get_bucket("bucket_name")
        blob = bucket.blob("object_key")
        bytes_stream = io.BytesIO()
        blob.download_to_file(bytes_stream)
        bytes_stream.seek(0)
        csv_file_stream = io.TextIOWrapper(bytes_stream, encoding="utf-8")
        reader = csv.DictReader(csv_file_stream, delimiter=",", fieldnames=self.fieldnames)
        for row in reader:
            yield row

    def close(self):
        pass
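Outside YepCode's runtime there is no yepcode.integration helper, but the class can still be exercised locally. The following is a minimal sketch under the assumption that the credential resolves to a standard google-cloud-storage Client and that the bucket and object placeholders in the TODOs have been filled in:

from google.cloud import storage  # assumption: credentials configured locally

source = GoogleStorageSourceReadRemoteCsv()
# Bypass setup(): inject a plain client in place of the yepcode credential helper
source.storage_client = storage.Client()
source.fieldnames = None  # the CSV's first row is treated as the header

for row in source.generator():
    print(row)  # each row is a dict keyed by the CSV column headers
source.close()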
Create CSV file entries in Amazon S3 Bucket
Used integrations: Amazon S3

JavaScript
class AwsS3TargetUploadCsv {
  async init() {
    // TODO: Create your aws-s3 credential
    // More info at https://yepcode.io/docs/integrations/aws-s3/#credential-configuration
    this.awsS3 = yepcode.integration.awsS3("your-aws-s3-credential-name");
    // Transforms the items into a csv format
    this.stringifier = csv.stringify({
      delimiter: ",",
    });
    this.targetStream = new PassThrough();
    this.stringifier.pipe(this.targetStream);
    // TODO: customize the Upload content
    // More info at: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/interfaces/_aws_sdk_lib_storage.options-1.html
    this.upload = new Upload({
      client: this.awsS3,
      params: {
        Bucket: "your-bucket-name",
        Key: "your-file-name.csv",
        Body: this.targetStream,
      },
    });
    this.upload.on("httpUploadProgress", (progress) => {
      console.log(`Upload progress`, progress);
    });
    this.uploadPromise = this.upload.done();
  }

  async consume(item) {
    // TODO: customize the csv row to create from your item content
    const csvRow = [item.value, item.text];
    this.stringifier.write(csvRow);
  }

  async close() {
    try {
      this.stringifier.end();
    } catch (error) {
      console.error(`Error ending stringifier`, error);
    }
    try {
      await this.uploadPromise;
    } catch (error) {
      console.error(`Error ending upload`, error);
    }
  }
}
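As with the source recipe, YepCode invokes this class's lifecycle: consume(item) is called once for every record published by the source, each row is written through the csv stringifier into a PassThrough stream feeding the multipart S3 Upload, and close() flushes the stringifier and awaits the upload's completion.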
Python

import csv
import io

class AccumulatingStream:
    def __init__(self):
        self.data = io.BytesIO()

    def write(self, item):
        self.data.write(item.encode("utf-8"))

    def get_stream(self):
        self.data.seek(0)
        return self.data

class AwsS3TargetUploadCsv:
    def setup(self):
        # TODO: Create your S3 credential:
        # More info at https://yepcode.io/docs/integrations/aws-s3/#credential-configuration
        self.aws_s3_client = yepcode.integration.awsS3("your-s3-credential-name")
        self.acc_stream = AccumulatingStream()
        self.stringifier = csv.writer(self.acc_stream, delimiter=",")

    def consume(self, generator, done):
        for item in generator:
            # TODO: customize the csv row to create from your item content
            csv_row = [item["value"], item["text"]]
            self.stringifier.writerow(csv_row)
        done()

    def close(self):
        # TODO: customize the bucket name and object key
        try:
            self.aws_s3_client.upload_fileobj(
                self.acc_stream.get_stream(),
                "bucket-name",
                "path/to/object.csv",
            )
        except Exception as error:
            print(f"Error uploading object: {error}")
FAQs
YepCode is a SaaS platform that lets you create, execute, and monitor integrations and automations using source code in a serverless environment.
We like to call it the Zapier for developers, since it brings all the agility and benefits of no-code tools (no server provisioning, environment configuration, or deployments), but with the full power of a programming language like JavaScript or Python.
These recipes are a good starting point for you to build your own YepCode processes and solve your integration and automation problems.
You only have to fill in the sign-up form and your account will be created on our FREE plan (no credit card required).
YepCode has been created with a clear enterprise approach (multi-tenant environment, team management, high security and auditing standards, IdP integrations, on-premise options, and more), so it can be the Swiss Army knife of any engineering team, especially those that need to extract information from or send it to external systems, and where some dynamism and adaptability to change are needed in that process.
Sure! You just need to do some configuration to allow YepCode servers to connect to that service. Check our docs page for more information.