<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Vazid | Blog]]></title><description><![CDATA[Multi-Cloud DevOps Engineer | Skilled in AWS, GCP, Azure cloud environments.]]></description><link>https://blog.vazid.live</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1766325956410/038ad5ef-5cf1-433f-9d80-eb181e7689bd.png</url><title>Vazid | Blog</title><link>https://blog.vazid.live</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 21:58:24 GMT</lastBuildDate><atom:link href="https://blog.vazid.live/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How to Set Up Personal and Office GCP Accounts on Your Local System ]]></title><description><![CDATA[When you work with both personal and office Google Cloud accounts, it’s easy to mix up credentials on your local machine. In this guide, you’ll learn how to:
Prerequisites
Before you follow this guide]]></description><link>https://blog.vazid.live/how-to-set-up-personal-and-office-gcp-accounts-on-your-local-system</link><guid isPermaLink="true">https://blog.vazid.live/how-to-set-up-personal-and-office-gcp-accounts-on-your-local-system</guid><category><![CDATA[GCP]]></category><category><![CDATA[GCP DevOps]]></category><category><![CDATA[Bash]]></category><category><![CDATA[zsh]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Tue, 07 Apr 2026 10:28:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/61fe1129fd8b083e5728bdbe/2b3b7262-597c-4233-a401-04ec7de24602.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you work with both personal and office Google Cloud accounts, it’s easy to mix up credentials on your local machine. In this guide, you’ll learn how to:</p>
<h2>Prerequisites</h2>
<p>Before you follow this guide, make sure you have:</p>
<ol>
<li><p><strong>Google Cloud SDK (</strong><code>gcloud</code><strong>) installed</strong><br />Install the official Google Cloud SDK for your operating system:</p>
<ul>
<li>Docs: <a href="https://cloud.google.com/sdk/docs/install">https://cloud.google.com/sdk/docs/install</a></li>
</ul>
</li>
<li><p><strong>Two separate Google accounts</strong></p>
<ul>
<li><p>One <strong>personal</strong> Google account (e.g., your Gmail)</p>
</li>
<li><p>One <strong>office/work</strong> Google account managed by your organization</p>
</li>
</ul>
</li>
<li><p><strong>Existing</strong> <code>gcloud</code> <strong>configurations (optional but recommended)</strong><br />It’s helpful to have named <code>gcloud</code> configurations for each account, for example:</p>
<ul>
<li><p><code>personal</code></p>
</li>
<li><p><code>office</code> (or any name you prefer for your office account)</p>
</li>
</ul>
<p>You can check existing configurations with:</p>
<pre><code class="language-bash">gcloud config configurations list
</code></pre>
</li>
<li><p><strong>Basic familiarity with your shell</strong><br />You should be comfortable with:</p>
<ul>
<li><p>Editing your shell config file (<code>~/.bashrc</code> or <code>~/.zshrc</code>)</p>
</li>
<li><p>Running commands in your terminal</p>
</li>
</ul>
</li>
</ol>
<p>With these prerequisites in place, you’re ready to:</p>
<ul>
<li><p>Create separate Application Default Credentials (ADC) profiles for personal and office accounts</p>
</li>
<li><p>Switch between them quickly using simple shell aliases</p>
</li>
</ul>
<hr />
<h2>Create an ADC profile for your personal account</h2>
<p>First, log in with your personal Google account using <code>gcloud</code>:</p>
<pre><code class="language-shell">gcloud auth application-default login
</code></pre>
<p>This command will print a URL. Open it in your browser and log in using your personal Gmail account. Once you finish authentication, <code>gcloud</code> will generate an ADC file at:</p>
<p><code>~/.config/gcloud/application_default_credentials.json</code></p>
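<p>Optionally, verify the new credentials work before copying the file. This sanity check (not part of the original steps) should print a short-lived access token:</p>
<pre><code class="language-shell">gcloud auth application-default print-access-token
</code></pre>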
<p><strong>Create</strong> <code>adc_personal.json</code></p>
<p>Now copy the newly generated ADC file to a dedicated file for your personal account:</p>
<pre><code class="language-shell">cp ~/.config/gcloud/application_default_credentials.json \
~/.config/gcloud/adc_personal.json
</code></pre>
<p>We create a separate file because a single <code>application_default_credentials.json</code> cannot store multiple accounts; each account needs its own ADC file.</p>
<hr />
<h2>Create an ADC profile for your office account</h2>
<p>Repeat the process, this time using your office Google account.</p>
<p>First, log in with your office email:</p>
<pre><code class="language-shell">gcloud auth application-default login
</code></pre>
<p>Again, open the URL in your browser and authenticate with your office account. After login, the same <code>application_default_credentials.json</code> file will be updated with your office credentials.</p>
<p><strong>Create</strong> <code>adc_office.json</code></p>
<p>Now copy that file to a dedicated ADC file for your office account:</p>
<pre><code class="language-shell">cp ~/.config/gcloud/application_default_credentials.json \
~/.config/gcloud/adc_office.json
</code></pre>
<p>At this point, you should have:</p>
<pre><code class="language-shell">~/.config/gcloud/adc_personal.json  → personal credentials
~/.config/gcloud/adc_office.json    → office credentials
</code></pre>
<hr />
<h2>Create GCP account switcher aliases</h2>
<p>To quickly switch between accounts, you can define shell aliases that:</p>
<ol>
<li><p>Activate the correct <code>gcloud</code> configuration</p>
</li>
<li><p>Set the default project</p>
</li>
<li><p>Point <code>GOOGLE_APPLICATION_CREDENTIALS</code> to the right ADC file</p>
</li>
</ol>
<p><strong>First, check which shell you’re using:</strong></p>
<pre><code class="language-shell">echo $0
</code></pre>
<p>If the output contains <code>bash</code>, edit:</p>
<pre><code class="language-shell">vi ~/.bashrc
</code></pre>
<p>If it contains <code>zsh</code>, edit:</p>
<pre><code class="language-shell">vi ~/.zshrc
</code></pre>
<p><strong>Add the aliases</strong></p>
<p>Append the following lines at the end of your shell config file (<code>.bashrc</code> or <code>.zshrc</code>). Make sure to replace <code>&lt;personal_project_id&gt;</code> and <code>&lt;office_project_id&gt;</code> with your actual GCP project IDs.</p>
<pre><code class="language-shell">###-------- GCP ACCOUNT SWITCHER -------------
###--- PERSONAL SWITCH ---

alias gcp-personal='gcloud config configurations activate personal;
gcloud config set project &lt;personal_project_id&gt;;
export GOOGLE_CLOUD_PROJECT="&lt;personal_project_id&gt;";
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/adc_personal.json";
echo "Switched to PERSONAL (Gmail)"'

###--- OFFICE SWITCH ---

alias gcp-office='gcloud config configurations activate office;
gcloud config set project &lt;office_project_id&gt;;
export GOOGLE_CLOUD_PROJECT="&lt;office_project_id&gt;";
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/.config/gcloud/adc_office.json";
echo "Switched to OFFICE (Work)"'
</code></pre>
<p>After editing the file, reload your shell configuration:</p>
<pre><code class="language-shell"># For bash
source ~/.bashrc

# For zsh
source ~/.zshrc
</code></pre>
<p><strong>Switching between accounts</strong></p>
<p>Now you can switch accounts with a single command:</p>
<pre><code class="language-shell">##Switch to personal GCP account
gcp-personal

##Switch to office GCP account
gcp-office
</code></pre>
<p>Each alias will:</p>
<ul>
<li><p>Activate the right <code>gcloud</code> configuration</p>
</li>
<li><p>Set the appropriate default project</p>
</li>
<li><p>Point <code>GOOGLE_APPLICATION_CREDENTIALS</code> to the correct ADC file</p>
</li>
</ul>
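<p>After switching, you can confirm everything points at the right account. This is an optional check; the active configuration and the credentials path should match the account you selected:</p>
<pre><code class="language-shell">gcp-personal

# IS_ACTIVE should be 'True' for the 'personal' configuration
gcloud config configurations list

# Should end with adc_personal.json
echo $GOOGLE_APPLICATION_CREDENTIALS
</code></pre>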
<hr />
<h2>Conclusion</h2>
<p>This setup helps you avoid accidental deployments to the wrong project and makes working with multiple GCP accounts much smoother.</p>
]]></content:encoded></item><item><title><![CDATA[Securing GCS Bucket Access for GCP VMs: Best Practices Explained]]></title><description><![CDATA[In this lab we will see how to access the GCS bucket data to GCP VM in most secure manner.
What we’ll cover

Cloud Storage FUSE

Service Account

Linux User

GCP VM

Cloud KMS


What you’ll need

Goog]]></description><link>https://blog.vazid.live/securing-gcs-bucket-access-for-gcp-vms-best-practices-explained</link><guid isPermaLink="true">https://blog.vazid.live/securing-gcs-bucket-access-for-gcp-vms-best-practices-explained</guid><category><![CDATA[GCP]]></category><category><![CDATA[#gcs]]></category><category><![CDATA[vm]]></category><category><![CDATA[compute engine]]></category><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Sun, 21 Dec 2025 14:01:08 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766304178498/a882da12-cc8d-4467-b0aa-9d57f74b7ff9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this lab we will see how to access the GCS bucket data to GCP VM in most secure manner.</p>
<h1>What we’ll cover</h1>
<ul>
<li><p>Cloud Storage FUSE</p>
</li>
<li><p>Service Account</p>
</li>
<li><p>Linux User</p>
</li>
<li><p>GCP VM</p>
</li>
<li><p>Cloud KMS</p>
</li>
</ul>
<h1>What you’ll need</h1>
<ul>
<li><p>Google Cloud Admin access</p>
</li>
<li><p>Access to VM and a Bucket</p>
</li>
<li><p>Cloud KMS access</p>
</li>
</ul>
<p>Here is a flowchart of this lab:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766317568819/08419075-bfb4-4c8f-ae63-493a7f313e14.png" alt="" style="display:block;margin:0 auto" />

<p>Let’s first create the <a href="https://docs.cloud.google.com/kms/docs/key-management-service">Cloud KMS</a> keyring and cryptographic key.</p>
<h3>Setup Cloud KMS</h3>
<p>Cloud KMS helps establish <a href="https://en.wikipedia.org/wiki/Data_sovereignty"><strong>data sovereignty</strong></a> between the GCS bucket and the VM. It ensures that even if someone gets access to a physical disk in a Google data center, they cannot read your files without your cryptographic keys.</p>
<p>We use Cloud KMS here to build the most secure setup for our data: from the virtual layer down to the physical disks, the data stays protected throughout this transaction.</p>
<pre><code class="language-bash">gcloud kms keyrings create secure-bucket-keyring --location us-central1
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766316679563/e69c7b4e-3b6d-43b7-aafe-78412f7c0578.png" alt="" style="display:block;margin:0 auto" />

<p>This keyring will be used by the Cloud Storage bucket and the VM to encrypt and decrypt the data.</p>
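<p>The commands below reference a crypto key named <code>secure-vm-bucket-key</code> inside this keyring, so create it now (this step is implied but not shown explicitly in the original flow):</p>
<pre><code class="language-bash"># Create a symmetric encryption key inside the keyring
gcloud kms keys create secure-vm-bucket-key \
    --keyring secure-bucket-keyring \
    --location us-central1 \
    --purpose encryption
</code></pre>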
<h3>Create a service account to use this KMS key</h3>
<pre><code class="language-bash">export SA_NAME="secure-vm-sa"
gcloud iam service-accounts create $SA_NAME --display-name "Secure VM Access"
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766320619644/e4189818-964b-46b2-8691-66d3bbd1153e.png" alt="Service account" style="display:block;margin:0 auto" />

<p>Grant the service account permission to use the Cloud KMS key:</p>
<pre><code class="language-bash">export REGION="us-central1"
export KEY_RING="secure-bucket-keyring"
export KEY_NAME="secure-vm-bucket-key"
export PROJECT_ID=$(gcloud config get-value project)
export SA_EMAIL="$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com"
gcloud kms keys add-iam-policy-binding $KEY_NAME \
    --location $REGION \
    --keyring $KEY_RING \
    --member "serviceAccount:$SA_EMAIL" \
    --role "roles/cloudkms.cryptoKeyEncrypterDecrypter"
</code></pre>
<p>Now we will grant the Cloud Storage service agent access to the Cloud KMS key, so the bucket can use it for encryption:</p>
<pre><code class="language-bash">export PROJECT_ID=$(gcloud config get-value project)
export PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
export STORAGE_AGENT="service-$PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com"
gcloud kms keys add-iam-policy-binding $KEY_NAME \
    --location $REGION \
    --keyring $KEY_RING \
    --member "serviceAccount:$STORAGE_AGENT" \
    --role "roles/cloudkms.cryptoKeyEncrypterDecrypter" 
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766321507665/a2c27d47-9662-4ea1-999a-319efa53601b.png" alt="Cloud KMS Decrypter" style="display:block;margin:0 auto" />

<p>Now we will create the bucket.</p>
<hr />
<h2>Bucket setup</h2>
<p>We will apply four security layers in a single command:</p>
<ol>
<li><p><strong>Enforce Public Access Prevention:</strong> Hard block on "allUsers" (internet).</p>
</li>
<li><p><strong>Uniform Bucket-Level Access:</strong> Disables messy ACLs (Access Control Lists); forces strict IAM usage.</p>
</li>
<li><p><strong>Default CMEK Encryption:</strong> Forces your KMS key on every file immediately.</p>
</li>
<li><p><strong>Versioning:</strong> Protects against ransomware or accidental deletion (you can "undelete" files).</p>
</li>
</ol>
<hr />
<pre><code class="language-bash">BUCKET_NAME="secure-bucket-lab-2025"
REGION="us-central1"
PROJECT_ID=$(gcloud config get-value project)
KMS_KEY="projects/$PROJECT_ID/locations/$REGION/keyRings/secure-bucket-keyring/cryptoKeys/secure-vm-bucket-key"
</code></pre>
<blockquote>
<p>Note: bucket names are globally unique, so create your bucket with a unique name of your own; please don’t reuse this one.</p>
</blockquote>
<p>Now let’s create the secure bucket with the Cloud KMS key, public access prevention, and uniform bucket-level access:</p>
<pre><code class="language-bash">gcloud storage buckets create gs://$BUCKET_NAME \
    --location=$REGION \
    --project=$PROJECT_ID \
    --public-access-prevention \
    --uniform-bucket-level-access \
    --default-encryption-key=$KMS_KEY
</code></pre>
<p>Enable versioning on the bucket:</p>
<pre><code class="language-bash">gcloud storage buckets update gs://$BUCKET_NAME --versioning
</code></pre>
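<p>Optionally, double-check that these settings were applied. This extra verification step isn’t in the original walkthrough:</p>
<pre><code class="language-bash"># Inspect the bucket: look for the default KMS key, public access
# prevention, uniform bucket-level access, and versioning in the output
gcloud storage buckets describe gs://$BUCKET_NAME
</code></pre>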
<p>Grant the service account access to the bucket:</p>
<pre><code class="language-bash">gcloud storage buckets add-iam-policy-binding gs://$BUCKET_NAME \
    --member "serviceAccount:$SA_EMAIL" \
    --role "roles/storage.objectUser"
</code></pre>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766322283657/366a0379-f839-4428-9456-c60b9753837f.png" alt="Bucket access" style="display:block;margin:0 auto" />

<p>Now we will create the VM</p>
<pre><code class="language-bash">VM_NAME="secure-vm"
ZONE="us-central1-a"
PROJECT_ID=$(gcloud config get-value project)
SA_EMAIL="secure-vm-sa@$PROJECT_ID.iam.gserviceaccount.com"

gcloud compute instances create $VM_NAME \
    --zone=$ZONE \
    --machine-type=e2-medium \
    --image-family=ubuntu-2204-lts \
    --image-project=ubuntu-os-cloud \
    --service-account=$SA_EMAIL \
    --scopes=cloud-platform \
    --shielded-secure-boot \
    --shielded-vtpm \
    --shielded-integrity-monitoring \
    --tags=secure-ssh
</code></pre>
<p>We added these flags to make the VM more secure:</p>
<ul>
<li><p><code>--shielded-secure-boot</code>: Ensures the OS hasn't been tampered with before it loads.</p>
</li>
<li><p><code>--service-account</code>: It never uses the "Default" editor account, only your restricted one.</p>
</li>
<li><p><code>--tags=secure-ssh</code>: We will use this tag to create a specific firewall rule next.</p>
</li>
</ul>
<p>Now lock down the firewall so SSH access is allowed only through Cloud IAP:</p>
<pre><code class="language-bash">gcloud compute firewall-rules create allow-ssh-ingress-from-iap \
    --direction=INGRESS \
    --action=allow \
    --rules=tcp:22 \
    --source-ranges=35.235.240.0/20 \
    --target-tags=secure-ssh
</code></pre>
<p>Use this command to SSH into the VM through a Cloud IAP tunnel:</p>
<pre><code class="language-bash">gcloud compute ssh $VM_NAME --zone=$ZONE --tunnel-through-iap
</code></pre>
<h3>Install Dependencies on VM</h3>
<p>Google <a href="https://docs.cloud.google.com/storage/docs/cloud-storage-fuse/overview">Cloud Storage FUSE</a></p>
<pre><code class="language-bash">export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb https://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

sudo apt-get update
sudo apt-get install -y gcsfuse
</code></pre>
<p>Check your user id</p>
<pre><code class="language-bash">id &lt;username&gt;
</code></pre>
<p>Then create a dedicated user and group:</p>
<pre><code class="language-bash">sudo groupadd -g 2000 app_group
sudo useradd -u 2000 -g 2000 -m -s /bin/bash app_user
</code></pre>
<p>Create a secure folder and grant access only to that user:</p>
<pre><code class="language-bash">sudo mkdir -p /home/app_user/secure_data
sudo chown app_user:app_group /home/app_user/secure_data
sudo chmod 700 /home/app_user/secure_data
</code></pre>
<p>Enable <code>user_allow_other</code> in <code>/etc/fuse.conf</code> so the mount can be used by the dedicated user:</p>
<pre><code class="language-bash">sudo sed -i 's/#user_allow_other/user_allow_other/g' /etc/fuse.conf
</code></pre>
<p>Add the bucket to <code>/etc/fstab</code> so the mount persists across reboots, then mount it:</p>
<pre><code class="language-bash">BUCKET_NAME="secure-bucket-lab-2025" 
echo "$BUCKET_NAME /home/app_user/secure_data gcsfuse rw,_netdev,allow_other,implicit_dirs,uid=2000,gid=2000,file_mode=600,dir_mode=700,noexec,nosuid,nodev 0 0" | sudo tee -a /etc/fstab
sudo mount -a
</code></pre>
<p>Now switch to the user and verify the folder:</p>
<pre><code class="language-bash">sudo su - app_user
ls -ld secure_data/
</code></pre>
<p>This will show the folder <strong>secure_data</strong> with permissions <em><strong>drwx------ 1 app_user app_group 0</strong></em>.</p>
<p>Now verify the mounted volume:</p>
<pre><code class="language-bash">df -h /home/app_user/secure_data
</code></pre>
<p>Here you can see the available size is 1.0P, which reflects the effectively petabyte-scale capacity of a Cloud Storage bucket rather than a real disk size.</p>
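<p>As a quick end-to-end check (an optional step beyond the original instructions), write a file through the mount as <code>app_user</code> and confirm it shows up in the bucket:</p>
<pre><code class="language-bash"># On the VM, as app_user: write through the FUSE mount
echo "hello from the VM" &gt; ~/secure_data/test.txt
ls -l ~/secure_data/

# From any machine with access to the bucket: the object should now exist
gcloud storage ls gs://$BUCKET_NAME/
</code></pre>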
<h1>Test data access</h1>
<p>Upload an object into the bucket:</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766325169949/83cb758a-6d10-4114-b22a-f70498f8886c.png" alt="Cloud Storage Bucket" style="display:block;margin:0 auto" />

<p>Now <em><strong>ls</strong></em> inside the <strong>secure_data</strong> folder</p>
<img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1766325133539/82e434fe-8f0a-4343-8ac1-fca758ae04d5.png" alt="GCP VM" style="display:block;margin:0 auto" />

<hr />
<blockquote>
<p>In this lab, we explore securing data access between Google Cloud Storage (GCS) buckets and Google Cloud Platform (GCP) virtual machines (VMs) using Cloud Key Management Service (KMS) and other security measures. Key steps include setting up a KMS keyring and cryptographic key, creating a service account for secure VM access, configuring a secure bucket with four layers of security (preventing public access, enforcing uniform bucket-level access, default CMEK encryption, and versioning), and setting up the VM with specific security configurations. We also cover using Cloud Storage FUSE for data access, creating secure user groups, and mounting the bucket data as a persistent volume. Finally, we verify data accessibility and security configurations on the VM.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Effective Strategies for Maintaining Multiple GitHub Accounts]]></title><description><![CDATA[Managing multiple GitHub accounts can be a hassle, especially when balancing personal, work, and client projects. This guide will help you streamline the process by automating commits and pushes.
Step 1: Organize Your Folders
To maintain multiple Git...]]></description><link>https://blog.vazid.live/effective-strategies-for-maintaining-multiple-github-accounts</link><guid isPermaLink="true">https://blog.vazid.live/effective-strategies-for-maintaining-multiple-github-accounts</guid><category><![CDATA[GitHub]]></category><category><![CDATA[Git]]></category><category><![CDATA[user]]></category><category><![CDATA[account]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Sat, 19 Jul 2025 20:05:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1752955439443/5938437f-9efc-456e-855f-1c4a69bd5859.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Managing multiple GitHub accounts can be a hassle, especially when balancing personal, work, and client projects. This guide will help you streamline the process by automating commits and pushes.</p>
<h1 id="heading-step-1-organize-your-folders"><strong>Step 1:</strong> Organize Your Folders</h1>
<p>To maintain multiple GitHub accounts on one system, the best approach is to create a separate folder for each of them, such as:</p>
<pre><code class="lang-bash">mkdir -p ~/Personal
mkdir -p ~/Work
mkdir -p ~/client1
</code></pre>
<p>Each folder holds the work for one account, and each folder can be aligned to the correct Git credentials.</p>
<h1 id="heading-step-2-create-config-files-for-each-account"><strong>Step 2:</strong> Create Config Files for Each Account</h1>
<pre><code class="lang-bash">nano ~/.gitconfig-personal
</code></pre>
<p>Add the following content. <strong>Replace the email with your actual personal email.</strong></p>
<pre><code class="lang-ini"><span class="hljs-section">[user]</span>
    <span class="hljs-attr">name</span> = your-personal-github-username
    <span class="hljs-attr">email</span> = your-personal-email@example.com
</code></pre>
<pre><code class="lang-bash">nano ~/.gitconfig-work
</code></pre>
<p>Add the following content. <strong>Replace the email with your actual work email.</strong></p>
<pre><code class="lang-ini"><span class="hljs-section">[user]</span>
    <span class="hljs-attr">name</span> = your-work-github-username
    <span class="hljs-attr">email</span> = your-work-email@example.com
</code></pre>
<pre><code class="lang-bash">nano ~/.gitconfig-client1
</code></pre>
<p>Add the following content. <strong>Replace the email with your actual client1 email.</strong></p>
<pre><code class="lang-ini"><span class="hljs-section">[user]</span>
    <span class="hljs-attr">name</span> = your-client1-github-username
    <span class="hljs-attr">email</span> = your-client1-email@example.com
</code></pre>
<h1 id="heading-step-3-update-your-main-git-config"><strong>Step 3:</strong> Update Your Main Git Config</h1>
<p>Now, edit your main <code>~/.gitconfig</code> file to automatically use the correct config file based on the project's path.</p>
<p>Open <code>~/.gitconfig</code> and make it look like this. This sets your personal account as the default and creates rules for your work directory.</p>
<p>On a work laptop, it usually makes sense to keep the work username and email as the default, so commits made outside your mapped folders still use the correct work credentials.</p>
<pre><code class="lang-ini"><span class="hljs-comment"># This is your default (work) user config</span>
<span class="hljs-section">[user]</span>
    <span class="hljs-attr">name</span> = your-work-github-username
    <span class="hljs-attr">email</span> = your-work-email@example.com

<span class="hljs-comment"># --- Automatic Profile Switching ---</span>
<span class="hljs-comment"># If the project is inside ~/Personal/, use the personal config</span>
<span class="hljs-section">[includeIf "gitdir:~/Personal/"]</span>
    <span class="hljs-attr">path</span> = ~/.gitconfig-personal

<span class="hljs-comment"># If the project is inside ~/client1/, use the client1 config</span>
<span class="hljs-section">[includeIf "gitdir:~/client1/"]</span>
    <span class="hljs-attr">path</span> = ~/.gitconfig-client1
</code></pre>
<p>With this setup, Git will automatically use the correct name and email for your commits.</p>
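<p>To confirm the conditional includes work, check which identity Git resolves inside a repository under each folder (the repository path here is just an example):</p>
<pre><code class="lang-bash"># Inside a personal repo: should print your personal email
cd ~/Personal/some-repo
git config user.email

# Show which config file each value comes from
git config --show-origin user.email
</code></pre>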
<h1 id="heading-step-4-manage-ssh-keys-for-authentication">Step 4: Manage SSH Keys for Authentication</h1>
<p>To push and pull from different accounts on the same service (like GitHub), you need separate SSH keys.</p>
<h2 id="heading-1-generate-two-ssh-keys">1. Generate Two SSH Keys</h2>
<p>Create a unique SSH key for each account.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Personal key</span>
ssh-keygen -t ed25519 -C <span class="hljs-string">"your-personal-email@example.com"</span> -f ~/.ssh/id_ed25519_personal

<span class="hljs-comment"># Work key</span>
ssh-keygen -t ed25519 -C <span class="hljs-string">"your-work-email@example.com"</span> -f ~/.ssh/id_ed25519_work

<span class="hljs-comment"># Client key</span>
ssh-keygen -t ed25519 -C <span class="hljs-string">"your-client1-email@example.com"</span> -f ~/.ssh/id_ed25519_client1
</code></pre>
<p>If you’re thinking what that <code>id_ed25519</code>: This is the cryptographic <strong>algorithm</strong> used to generate the key. Ed25519 is a modern, secure, and fast choice for SSH keys.</p>
<p>When prompted for a passphrase, you can either enter one for extra security or press Enter to skip.</p>
<h2 id="heading-2-add-keys-to-your-accounts">2. Add Keys to Your Accounts</h2>
<p>Copy the contents of each <strong>public</strong> key (<code>.pub</code> file) and add them to your corresponding GitHub accounts in <strong>Settings &gt; SSH and GPG keys</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1752952619315/574f1283-222d-42c3-9819-591d7bbbbcb7.png" alt /></p>
<p>On macOS you can copy each key to the clipboard with <code>pbcopy</code>; on Linux, use a tool such as <code>xclip</code> or <code>wl-copy</code> instead:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Copy your personal public key</span>
pbcopy &lt; ~/.ssh/id_ed25519_personal.pub

<span class="hljs-comment"># Copy your work public key</span>
pbcopy &lt; ~/.ssh/id_ed25519_work.pub

<span class="hljs-comment"># Copy your client public key</span>
pbcopy &lt; ~/.ssh/id_ed25519_client1.pub
</code></pre>
<h2 id="heading-3-configure-ssh-to-use-the-correct-key">3. Configure SSH to Use the Correct Key</h2>
<p>Create or edit your SSH config file at <code>~/.ssh/config</code>. This file tells your computer which key to use for which account.</p>
<pre><code class="lang-bash">nano ~/.ssh/config
</code></pre>
<p>Add the following configuration:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Personal GitHub account</span>
Host github.com-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes

<span class="hljs-comment"># Work GitHub account</span>
Host github.com-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes

<span class="hljs-comment"># Client1 GitHub account</span>
Host github.com-client1
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_client1
    IdentitiesOnly yes
</code></pre>
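<p>You can verify that each host alias picks up the correct key by testing authentication against GitHub; each command should print a greeting with the matching username before closing the connection:</p>
<pre><code class="lang-bash">ssh -T git@github.com-personal
ssh -T git@github.com-work
ssh -T git@github.com-client1
</code></pre>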
<h2 id="heading-4-clone-repos-using-custom-hosts">4. Clone Repos Using Custom Hosts</h2>
<p>From now on, when you clone a repository, you must use the custom host you just defined. This ensures the correct SSH key is used.</p>
<ul>
<li><p><strong>To clone a personal repo:</strong></p>
<pre><code class="lang-bash">  git <span class="hljs-built_in">clone</span> git@github.com-personal:your-personal-username/personal-repo.git
</code></pre>
</li>
<li><p><strong>To clone a work repo:</strong></p>
<pre><code class="lang-bash">  git <span class="hljs-built_in">clone</span> git@github.com-work:your-work-username/work-repo.git
</code></pre>
</li>
<li><p><strong>To clone a client1 repo:</strong></p>
<pre><code class="lang-bash">  git <span class="hljs-built_in">clone</span> git@github.com-work:your-client1-username/client1-repo.gitYour
</code></pre>
</li>
</ul>
<p>Your system is now fully configured to manage all three accounts automatically.</p>
<h2 id="heading-5-update-your-git-repositorys-remote-url">5. Update Your Git Repository's Remote URL</h2>
<p>Finally, navigate to the local repository you want to fix and tell it to use the new SSH URL.</p>
<ul>
<li><p><strong>For a personal repository (like your</strong> personal project):</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">cd</span> ~/Personal/vazid
  git remote set-url origin git@github.com-personal:your-personal-username/repo.git
</code></pre>
</li>
<li><p><strong>For a work repository:</strong></p>
<pre><code class="lang-bash">  <span class="hljs-built_in">cd</span> ~/Work/my-work-project
  git remote set-url origin git@github.com-work:your-work-username/work-repo.git
</code></pre>
</li>
<li><p><strong>For a client1 repository:</strong></p>
<pre><code class="lang-bash">  <span class="hljs-built_in">cd</span> ~/client1/my-work-project
  git remote set-url origin git@github.com-client1:your-client1-username/client1-repo.git
</code></pre>
</li>
</ul>
<p>You are now fully set up to push and pull using the correct account automatically.</p>
<p>Thank You</p>
]]></content:encoded></item><item><title><![CDATA[Introducing the Coffee House Theme for Ghostty Terminal]]></title><description><![CDATA[If you are a Coffee lover working hard and missing coffee so The Coffee House Theme for Ghostty terminal is here to add style, comfort, and a caffeine-inspired vibe to your terminal.
Key Changes of the Coffee House Theme
1. Coffee-Inspired Palette
Th...]]></description><link>https://blog.vazid.live/introducing-the-coffee-house-theme-for-ghostty-terminal</link><guid isPermaLink="true">https://blog.vazid.live/introducing-the-coffee-house-theme-for-ghostty-terminal</guid><category><![CDATA[ghostty]]></category><category><![CDATA[terminal]]></category><category><![CDATA[Linux]]></category><category><![CDATA[config]]></category><category><![CDATA[theme]]></category><category><![CDATA[themes]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Sat, 04 Jan 2025 16:51:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736008997329/e715e534-c92b-4eb6-9a19-05bd25cad467.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you are a Coffee lover working hard and missing coffee so The <strong>Coffee House Theme</strong> for Ghostty terminal is here to add style, comfort, and a caffeine-inspired vibe to your terminal.</p>
<h2 id="heading-key-changes-of-the-coffee-house-theme">Key Changes of the Coffee House Theme</h2>
<h3 id="heading-1-coffee-inspired-palette">1. <strong>Coffee-Inspired Palette</strong></h3>
<p>The theme features rich browns, warm golds, and muted jewel tones that evoke the coziness of a coffee shop:</p>
<ul>
<li><p><strong>Foreground:</strong> Creamy coffee <code>#E5D3B3</code></p>
</li>
<li><p><strong>Background:</strong> Dark espresso <code>#1C1715</code></p>
</li>
<li><p><strong>Cursor:</strong> Golden caramel <code>#D4A373</code></p>
</li>
</ul>
<h3 id="heading-2-transparency">2. <strong>Transparency</strong></h3>
<p>The background is slightly transparent (<code>background-opacity = 0.85</code>), blending seamlessly with your desktop for a modern and sophisticated look. Adjust the opacity to match your aesthetic.</p>
<h3 id="heading-3-title-customization">3. <strong>Title Customization</strong></h3>
<p>The terminal title, set to <code>"☕ Coffee House"</code>, adds a unique and personal touch.</p>
<h3 id="heading-4-font-settings">4. <strong>Font Settings</strong></h3>
<p>With the clean and modern <code>Source Code Pro</code> font, the theme ensures excellent readability.</p>
<p>The configuration file for Ghostty terminal is typically located at:</p>
<p><code>~/.config/ghostty/config</code></p>
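<p>If the file doesn’t exist yet, create the directory and an empty config first:</p>
<pre><code class="lang-bash">mkdir -p ~/.config/ghostty
touch ~/.config/ghostty/config
</code></pre>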
<p>Update the config file with the configuration below:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Ghostty Terminal Configuration</span>
<span class="hljs-comment"># Coffee House</span>

<span class="hljs-comment"># Window titlebar colors (only effective with ghostty theme)</span>
window-titlebar-background = <span class="hljs-comment">#362F2D  </span>
window-titlebar-foreground = <span class="hljs-comment">#E5D3B3  </span>

<span class="hljs-comment"># Terminal transparency</span>
background-opacity = 0.85

<span class="hljs-comment"># Title for the terminal</span>
title  = <span class="hljs-string">"☕ Coffee House"</span>

<span class="hljs-comment"># Window theme</span>
<span class="hljs-comment"># Remove this if you are not using linux</span>
window-theme = ghostty

<span class="hljs-comment"># Coffee-inspired color scheme with chocolate accents</span>
foreground = <span class="hljs-comment">#E5D3B3  </span>
background = <span class="hljs-comment">#1C1715</span>
cursor-color = <span class="hljs-comment">#D4A373 </span>

<span class="hljs-comment"># Normal colors (coffee and chocolate tones)</span>
palette = 0=<span class="hljs-comment">#3A2C28  </span>
palette = 1=<span class="hljs-comment">#A64D4D  </span>
palette = 2=<span class="hljs-comment">#7B6D49  </span>
palette = 3=<span class="hljs-comment">#C9A16C  </span>
palette = 4=<span class="hljs-comment">#6E4F4B  </span>
palette = 5=<span class="hljs-comment">#A67D94  </span>
palette = 6=<span class="hljs-comment">#8D7B6E  </span>
palette = 7=<span class="hljs-comment">#E5D3B3  </span>

<span class="hljs-comment"># Bright colors (wrapper and highlights)</span>
palette = 8=<span class="hljs-comment">#4D3E39  </span>
palette = 9=<span class="hljs-comment">#BF7F7F  </span>
palette = 10=<span class="hljs-comment">#9E8D6C </span>
palette = 11=<span class="hljs-comment">#D4B98B  </span>
palette = 12=<span class="hljs-comment">#957572  </span>
palette = 13=<span class="hljs-comment">#BA94B4  </span>
palette = 14=<span class="hljs-comment">#B7A493  </span>
palette = 15=<span class="hljs-comment">#FFF5E1  </span>

<span class="hljs-comment"># Font settings</span>
font-family = <span class="hljs-string">"Source Code Pro"</span>
font-size = 12
font-style = <span class="hljs-string">"Medium"</span>
</code></pre>
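<p>If you want to sanity-check a config like this programmatically, the <code>key = value</code> format is easy to parse. The following Python sketch is purely illustrative and not part of Ghostty; the helper names and sample snippet are my own:</p>
<pre><code class="lang-python">import re

HEX_COLOR = re.compile(r"^#[0-9A-Fa-f]{6}$")

def parse_config(text):
    """Parse `key = value` lines; keys may repeat (e.g. palette), so values are lists."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        config.setdefault(key.strip(), []).append(value.strip())
    return config

sample = """
# Coffee House
background-opacity = 0.85
foreground = #E5D3B3
cursor-color = #D4A373
palette = 0=#3A2C28
palette = 1=#A64D4D
"""

cfg = parse_config(sample)
print(cfg["foreground"])  # ['#E5D3B3']
print(HEX_COLOR.match(cfg["cursor-color"][0]) is not None)  # True
</code></pre>
<p>A check like this can catch a typo in a hex color before you reload the terminal.</p>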
<p>After saving the changes, restart the Ghostty terminal to apply the new theme, or reload the configuration with the shortcut <code>shift+ctrl+,</code>.</p>
<p>Here are some screenshots of the updated Ghostty terminal:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736008735689/e8c3b06c-b318-49a3-b1e0-b5209474d57d.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736008673754/25fa4323-a331-4f56-b6b3-b2deaf594c7f.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1736008432782/d8734938-be25-4ae0-bd92-71750f9685b7.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-share-your-setup">Share Your Setup!</h2>
<p>Enjoy coding in your cozy new terminal setup, and share your own tweaks and color picks. Happy brewing! ☕</p>
]]></content:encoded></item><item><title><![CDATA[How to Automate System Updates Using a Custom Bash Script]]></title><description><![CDATA[This blog post introduces a Bash script designed to simplify Linux system updates across various distributions. With this script, you can:

Perform system updates with a single command.

Schedule updates using cron jobs for convenience.

Log every up...]]></description><link>https://blog.vazid.live/how-to-automate-system-updates-using-a-custom-bash-script</link><guid isPermaLink="true">https://blog.vazid.live/how-to-automate-system-updates-using-a-custom-bash-script</guid><category><![CDATA[Linux]]></category><category><![CDATA[Bash]]></category><category><![CDATA[cronjob]]></category><category><![CDATA[logging]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Sat, 04 Jan 2025 09:06:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/4rym6Ltlq-E/upload/95d2c3ca9f862528202f21ec60796fab.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This blog post introduces a Bash script designed to simplify Linux system updates across various distributions. With this script, you can:</p>
<ul>
<li><p>Perform system updates with a single command.</p>
</li>
<li><p>Schedule updates using cron jobs for convenience.</p>
</li>
<li><p>Log every update for better tracking and troubleshooting.</p>
</li>
</ul>
<hr />
<h4 id="heading-prerequisites"><strong>Prerequisites</strong></h4>
<p>Before using this script, ensure you have:</p>
<ul>
<li><p>Root/sudo access on your Linux system.</p>
</li>
<li><p>Basic knowledge of Linux commands.</p>
</li>
<li><p>Sufficient disk space for updates.</p>
</li>
<li><p>A stable internet connection.</p>
</li>
</ul>
<hr />
<h3 id="heading-the-script"><strong>The Script</strong></h3>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>

RELEASE_FILE=<span class="hljs-string">"/etc/os-release"</span>
LOG_DIR=<span class="hljs-string">"/var/log/system_updates"</span>
DATE=$(date <span class="hljs-string">'+%Y-%m-%d_%H-%M-%S'</span>)
LOG_FILE=<span class="hljs-string">"<span class="hljs-variable">$LOG_DIR</span>/system_update_<span class="hljs-variable">$DATE</span>.log"</span>
LOCK_FILE=<span class="hljs-string">"/tmp/system_update.lock"</span>
MAX_RETRIES=3
RETRY_DELAY=5

<span class="hljs-function"><span class="hljs-title">cleanup</span></span>() {
    rm -f <span class="hljs-string">"<span class="hljs-variable">$LOCK_FILE</span>"</span>
    log_message <span class="hljs-string">"------------------Script execution completed"</span>
}

<span class="hljs-built_in">trap</span> cleanup EXIT

<span class="hljs-function"><span class="hljs-title">log_message</span></span>() {
    <span class="hljs-built_in">local</span> message=<span class="hljs-string">"<span class="hljs-variable">$1</span>"</span>
    <span class="hljs-built_in">local</span> timestamp=$(date <span class="hljs-string">'+%Y-%m-%d %H:%M:%S'</span>)
    <span class="hljs-built_in">echo</span> <span class="hljs-string">"<span class="hljs-variable">$timestamp</span> - <span class="hljs-variable">$message</span>"</span> | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>
}

<span class="hljs-function"><span class="hljs-title">check_system_resources</span></span>() {
    <span class="hljs-built_in">local</span> min_space=1000000
    <span class="hljs-built_in">local</span> available_space=$(df /usr -k | awk <span class="hljs-string">'NR==2 {print $4}'</span>)

    <span class="hljs-keyword">if</span> [ <span class="hljs-string">"<span class="hljs-variable">$available_space</span>"</span> -lt <span class="hljs-string">"<span class="hljs-variable">$min_space</span>"</span> ]; <span class="hljs-keyword">then</span>
        log_message <span class="hljs-string">"!!!!! ERROR: Insufficient disk space. Required: 1GB, Available: <span class="hljs-subst">$((available_space / 1024)</span>)MB !!!!!"</span>
        <span class="hljs-built_in">exit</span> 1
    <span class="hljs-keyword">fi</span>
}

<span class="hljs-function"><span class="hljs-title">create_lock</span></span>() {
    <span class="hljs-keyword">if</span> [ -f <span class="hljs-string">"<span class="hljs-variable">$LOCK_FILE</span>"</span> ]; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">local</span> pid=$(cat <span class="hljs-string">"<span class="hljs-variable">$LOCK_FILE</span>"</span>)
        <span class="hljs-keyword">if</span> <span class="hljs-built_in">kill</span> -0 <span class="hljs-string">"<span class="hljs-variable">$pid</span>"</span> 2&gt;/dev/null; <span class="hljs-keyword">then</span>
            log_message <span class="hljs-string">"*****Another update process is running. Exiting"</span>
            <span class="hljs-built_in">exit</span> 1
        <span class="hljs-keyword">fi</span>
    <span class="hljs-keyword">fi</span>
    <span class="hljs-built_in">echo</span> $$ &gt; <span class="hljs-string">"<span class="hljs-variable">$LOCK_FILE</span>"</span>
}

<span class="hljs-function"><span class="hljs-title">create_log_directory</span></span>() {
    <span class="hljs-keyword">if</span> [ ! -d <span class="hljs-string">"<span class="hljs-variable">$LOG_DIR</span>"</span> ]; <span class="hljs-keyword">then</span>
        log_message <span class="hljs-string">"Creating log directory..."</span>
        <span class="hljs-keyword">if</span> sudo mkdir -p <span class="hljs-string">"<span class="hljs-variable">$LOG_DIR</span>"</span>; <span class="hljs-keyword">then</span>
            sudo chmod 750 <span class="hljs-string">"<span class="hljs-variable">$LOG_DIR</span>"</span>
            log_message <span class="hljs-string">"Log directory created at <span class="hljs-variable">$LOG_DIR</span>"</span>
        <span class="hljs-keyword">else</span>
            log_message <span class="hljs-string">"!!!! ERROR: Failed to create log directory !!!!"</span>
            <span class="hljs-built_in">exit</span> 1
        <span class="hljs-keyword">fi</span>
    <span class="hljs-keyword">fi</span>
}

<span class="hljs-function"><span class="hljs-title">check_internet</span></span>() {
    <span class="hljs-built_in">local</span> retry=0
    <span class="hljs-keyword">while</span> [ <span class="hljs-variable">$retry</span> -lt <span class="hljs-variable">$MAX_RETRIES</span> ]; <span class="hljs-keyword">do</span>
        <span class="hljs-keyword">if</span> ping -c 1 8.8.8.8 &gt;/dev/null 2&gt;&amp;1; <span class="hljs-keyword">then</span>
            <span class="hljs-built_in">return</span> 0
        <span class="hljs-keyword">fi</span>
        retry=$((retry + <span class="hljs-number">1</span>))
        log_message <span class="hljs-string">"Waiting for internet connection (attempt <span class="hljs-variable">$retry</span>/<span class="hljs-variable">$MAX_RETRIES</span>)"</span>
        sleep <span class="hljs-variable">$RETRY_DELAY</span>
    <span class="hljs-keyword">done</span>
    log_message <span class="hljs-string">"!!!!! ERROR: No internet connection !!!!!"</span>
    <span class="hljs-built_in">return</span> 1
}

<span class="hljs-function"><span class="hljs-title">update_arch</span></span>() {
    log_message <span class="hljs-string">"----------Updating Arch Linux------------"</span>

    <span class="hljs-keyword">if</span> ! sudo pacman -Sy --noconfirm 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>; <span class="hljs-keyword">then</span>
        log_message <span class="hljs-string">"ERROR: Failed to sync package databases"</span>
        <span class="hljs-built_in">return</span> 1
    <span class="hljs-keyword">fi</span>

    <span class="hljs-keyword">if</span> sudo pacman -Su --noconfirm 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>; <span class="hljs-keyword">then</span>
        log_message <span class="hljs-string">"--------------Arch Linux system updated successfully"</span>
        sudo paccache -r
    <span class="hljs-keyword">else</span>
        log_message <span class="hljs-string">"!!!!! ERROR: Failed to update Arch Linux system !!!!!"</span>
        <span class="hljs-built_in">return</span> 1
    <span class="hljs-keyword">fi</span>
}

<span class="hljs-function"><span class="hljs-title">update_apt</span></span>() {
    log_message <span class="hljs-string">"--------------Updating APT-based system-------------"</span>

    <span class="hljs-built_in">export</span> DEBIAN_FRONTEND=noninteractive

    <span class="hljs-built_in">local</span> retry=0
    <span class="hljs-keyword">while</span> [ <span class="hljs-variable">$retry</span> -lt <span class="hljs-variable">$MAX_RETRIES</span> ]; <span class="hljs-keyword">do</span>
        <span class="hljs-keyword">if</span> sudo apt-get update -y 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span> &amp;&amp; \
           sudo apt-get upgrade -y 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span> &amp;&amp; \
           sudo apt-get dist-upgrade -y 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>; <span class="hljs-keyword">then</span>
            log_message <span class="hljs-string">"---------------APT system updated successfully"</span>
            sudo apt-get autoremove -y
            sudo apt-get clean
            <span class="hljs-built_in">return</span> 0
        <span class="hljs-keyword">fi</span>
        retry=$((retry + <span class="hljs-number">1</span>))
        log_message <span class="hljs-string">"Update attempt <span class="hljs-variable">$retry</span> failed. Retrying in <span class="hljs-variable">$RETRY_DELAY</span> seconds..."</span>
        sleep <span class="hljs-variable">$RETRY_DELAY</span>
    <span class="hljs-keyword">done</span>
    log_message <span class="hljs-string">"!!!!! ERROR: Failed to update APT system after <span class="hljs-variable">$MAX_RETRIES</span> attempts !!!!!"</span>
    <span class="hljs-built_in">return</span> 1
}

<span class="hljs-function"><span class="hljs-title">update_redhat</span></span>() {
    log_message <span class="hljs-string">"--------------Updating RedHat-based system---------------"</span>

    <span class="hljs-built_in">local</span> retry=0
    <span class="hljs-keyword">while</span> [ <span class="hljs-variable">$retry</span> -lt <span class="hljs-variable">$MAX_RETRIES</span> ]; <span class="hljs-keyword">do</span>
        <span class="hljs-keyword">if</span> sudo dnf update -y 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span> &amp;&amp; \
           sudo dnf upgrade -y 2&gt;&amp;1 | sudo tee -a <span class="hljs-string">"<span class="hljs-variable">$LOG_FILE</span>"</span>; <span class="hljs-keyword">then</span>
            log_message <span class="hljs-string">"---------------RedHat system updated successfully"</span>
            sudo dnf clean all
            <span class="hljs-built_in">return</span> 0
        <span class="hljs-keyword">fi</span>
        retry=$((retry + <span class="hljs-number">1</span>))
        log_message <span class="hljs-string">"Update attempt <span class="hljs-variable">$retry</span> failed. Retrying in <span class="hljs-variable">$RETRY_DELAY</span> seconds..."</span>
        sleep <span class="hljs-variable">$RETRY_DELAY</span>
    <span class="hljs-keyword">done</span>
    log_message <span class="hljs-string">"!!!! ERROR: Failed to update RedHat system after <span class="hljs-variable">$MAX_RETRIES</span> attempts !!!!"</span>
    <span class="hljs-built_in">return</span> 1
}

<span class="hljs-function"><span class="hljs-title">main</span></span>() {
    <span class="hljs-keyword">if</span> [ <span class="hljs-string">"<span class="hljs-variable">$EUID</span>"</span> -ne 0 ]; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">echo</span> <span class="hljs-string">"This script must be run as root. Please use sudo."</span> &gt;&amp;2
        <span class="hljs-built_in">exit</span> 1
    <span class="hljs-keyword">fi</span>

    create_lock
    create_log_directory

    log_message <span class="hljs-string">"------------------------Starting system update"</span>

    check_system_resources

    <span class="hljs-keyword">if</span> ! check_internet; <span class="hljs-keyword">then</span>
        <span class="hljs-built_in">exit</span> 1
    <span class="hljs-keyword">fi</span>

    <span class="hljs-keyword">if</span> grep -qi <span class="hljs-string">"arch"</span> <span class="hljs-string">"<span class="hljs-variable">$RELEASE_FILE</span>"</span>; <span class="hljs-keyword">then</span>
        update_arch
    <span class="hljs-keyword">elif</span> [ -d /etc/apt ]; <span class="hljs-keyword">then</span>
        update_apt
    <span class="hljs-keyword">elif</span> grep -qiE <span class="hljs-string">"redhat|fedora|centos"</span> <span class="hljs-string">"<span class="hljs-variable">$RELEASE_FILE</span>"</span>; <span class="hljs-keyword">then</span>
        update_redhat
    <span class="hljs-keyword">else</span>
        log_message <span class="hljs-string">"!!!! ERROR: Unsupported or unrecognized Linux distribution !!!!!"</span>
        <span class="hljs-built_in">exit</span> 1
    <span class="hljs-keyword">fi</span>

    log_message <span class="hljs-string">"--------------------------System update process completed"</span>
}

main
</code></pre>
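<p>The <code>create_lock</code> pattern above (write your PID to a file; treat the lock as stale if that PID is no longer alive) is a general technique. Here is a minimal Python sketch of the same idea, shown for illustration only; the demo path and function names are my own:</p>
<pre><code class="lang-python">import os

LOCK_FILE = "/tmp/system_update_demo.lock"

def create_lock(lock_file=LOCK_FILE):
    """Return True if the lock was acquired, False if a live process already holds it."""
    if os.path.exists(lock_file):
        try:
            pid = int(open(lock_file).read().strip())
        except ValueError:
            pid = -1  # unreadable lock file: treat as stale
        if pid > 0:
            try:
                os.kill(pid, 0)  # signal 0 checks existence, like `kill -0` in the script
                return False     # holder is still running
            except ProcessLookupError:
                pass             # stale lock: previous holder is gone
            except PermissionError:
                return False     # pid exists but belongs to another user
    with open(lock_file, "w") as f:
        f.write(str(os.getpid()))
    return True

def release_lock(lock_file=LOCK_FILE):
    if os.path.exists(lock_file):
        os.remove(lock_file)
</code></pre>
<p>Note this check-then-write sequence has a small race window; the Bash script accepts the same trade-off, which is fine for a job that runs once a day.</p>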
<hr />
<h3 id="heading-setting-up-the-script"><strong>Setting Up the Script</strong></h3>
<ol>
<li><p>Download the script.</p>
</li>
<li><p>Make it executable: <code>chmod +x update.sh</code></p>
</li>
<li><p>Move it to a system directory: <code>sudo mv update.sh /usr/local/bin/update</code></p>
</li>
<li><p>Set proper permissions: <code>sudo chmod 755 /usr/local/bin/update</code></p>
</li>
</ol>
<hr />
<h3 id="heading-automating-the-script-with-cron"><strong>Automating the Script with Cron</strong></h3>
<p>To schedule daily updates, create a cron job:</p>
<ol>
<li><p>Open the cron editor:</p>
<pre><code class="lang-bash"> sudo crontab -e
</code></pre>
</li>
<li><p>Add the following line:</p>
<pre><code class="lang-bash"> 0 0 * * * /usr/<span class="hljs-built_in">local</span>/bin/update
</code></pre>
</li>
</ol>
<p>This schedules the script to run every day at midnight.</p>
<hr />
<h3 id="heading-managing-log-files"><strong>Managing Log Files</strong></h3>
<p>To prevent log file accumulation, create another cron job to delete logs older than 30 days:</p>
<pre><code class="lang-bash">0 0 * * * find /var/<span class="hljs-built_in">log</span>/system_updates/ -<span class="hljs-built_in">type</span> f -mtime +30 -<span class="hljs-built_in">exec</span> rm {} \;
</code></pre>
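<p>The same cleanup can be done without <code>find</code>. This Python sketch mirrors the cron job above (the 30-day threshold matches the cron line; the function name is my own):</p>
<pre><code class="lang-python">import os
import time

def prune_old_logs(log_dir, max_age_days=30):
    """Delete regular files in log_dir older than max_age_days; return deleted names."""
    cutoff = time.time() - max_age_days * 86400  # 86400 seconds in a day
    deleted = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            deleted.append(name)
    return deleted
</code></pre>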
<hr />
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>By automating your system updates with this script, you can save time and ensure your Linux system is secure and up-to-date. It’s especially useful for servers and critical systems.</p>
<p>Give this script a try, and let me know how it works for you! Got questions or suggestions? Feel free to reach out via email.</p>
<p>Don’t forget to share this with your fellow Linux enthusiasts!</p>
]]></content:encoded></item><item><title><![CDATA[Google BigQuery Interview Question]]></title><description><![CDATA[What is Authorized views and materialized views
Both authorized views and materialized views are powerful tools in BigQuery that offer improved data access and performance. However, they have key differences in their functionalities and applications:...]]></description><link>https://blog.vazid.live/google-bigquery-interview-question</link><guid isPermaLink="true">https://blog.vazid.live/google-bigquery-interview-question</guid><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Tue, 12 Dec 2023 14:54:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715068181483/0751e796-597e-446d-ad4e-0851e28ac39c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-what-is-authorized-views-and-materialized-views"><strong>What is Authorized views and materialized views</strong></h3>
<p>Both <strong>authorized views</strong> and <strong>materialized views</strong> are powerful tools in BigQuery that offer improved data access and performance. However, they have key differences in their functionalities and applications:</p>
<p><strong>Authorized Views:</strong></p>
<ul>
<li><p><strong>Definition:</strong> Views that are explicitly authorized to access a dataset, letting users query the view’s results without having permissions on the underlying tables.</p>
</li>
<li><p><strong>Functionality:</strong> Expose a filtered or aggregated subset of the data (for example, via a WHERE clause in the view’s query) while keeping the source tables private.</p>
</li>
<li><p><strong>Read-only:</strong> Cannot be directly modified.</p>
</li>
<li><p><strong>Benefits:</strong></p>
<ul>
<li><p><strong>Fine-grained sharing:</strong> Lets you expose only specific rows or columns of sensitive tables.</p>
</li>
<li><p><strong>Simplified data access:</strong> Simplifies complex queries with pre-filtered data.</p>
</li>
<li><p><strong>Data access control:</strong> Grants users access to query results without giving them access to the underlying tables.</p>
</li>
</ul>
</li>
</ul>
<p><strong>To create an Authorized View</strong></p>
<p><a target="_blank" href="https://youtu.be/0jC5FrMc79k">Steps to set up Authorized View in BigQuery with Principle of least Privilege | PDE &amp; PCA Concepts</a></p>
<p><strong>Materialized Views:</strong></p>
<ul>
<li><p><strong>Definition:</strong> User-created views that store pre-computed results of a query.</p>
</li>
<li><p><strong>Functionality:</strong> Materialized views physically store data based on the query, significantly speeding up repeated queries with the same logic.</p>
</li>
<li><p><strong>Updatable:</strong> Can be refreshed automatically or manually as the underlying data changes.</p>
</li>
<li><p><strong>Benefits:</strong></p>
<ul>
<li><p><strong>Extreme query performance:</strong> Provide the fastest query response for frequently used queries.</p>
</li>
<li><p><strong>Reduced processing overhead:</strong> Eliminates repeated calculations for frequently used queries.</p>
</li>
<li><p><strong>Scalability:</strong> Enables efficient handling of large datasets for repeated queries.</p>
</li>
</ul>
</li>
<li><p><strong>Example:</strong> SQL</p>
<pre><code class="lang-sql">  <span class="hljs-keyword">CREATE</span> <span class="hljs-keyword">MATERIALIZED</span> <span class="hljs-keyword">VIEW</span> [view_name] <span class="hljs-keyword">AS</span>
  <span class="hljs-keyword">SELECT</span> [column1], [column2], <span class="hljs-keyword">SUM</span>([column3]) <span class="hljs-keyword">AS</span> total_value
  <span class="hljs-keyword">FROM</span> [dataset_name.][table_name]
  <span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> [column1], [column2];
</code></pre>
<p>  This query creates a materialized view named <code>[view_name]</code> that stores pre-calculated total values for specific columns and groups.</p>
</li>
</ul>
<p><strong>Choosing the Right Option:</strong></p>
<p>The best choice between authorized views and materialized views depends on your specific needs:</p>
<ul>
<li><p><strong>Frequently used queries:</strong> Materialized views are ideal for queries used repeatedly, significantly improving performance.</p>
</li>
<li><p><strong>Large datasets:</strong> Materialized views offer faster processing for large datasets involved in repeated queries.</p>
</li>
<li><p><strong>Data complexity:</strong> Materialized views are suitable for complex queries with aggregations and calculations.</p>
</li>
<li><p><strong>Data update frequency:</strong> Authorized views are better if the underlying data changes frequently, ensuring the view reflects the latest information.</p>
</li>
<li><p><strong>Data access control:</strong> Authorized views let you share query results without granting access to the base tables, simplifying control.</p>
</li>
</ul>
<h3 id="heading-how-to-optimize-table-performance-in-bigquery"><strong>How to optimize table performance in Bigquery</strong></h3>
<p>Optimizing table performance in Google BigQuery involves various strategies, including optimizing table structure, using efficient queries, and leveraging features provided by BigQuery. Here are some tips to enhance table performance:</p>
<p><strong>1. Partition and Cluster Tables:</strong></p>
<ul>
<li><p><strong>Partitioning:</strong> Divide large tables into smaller, more manageable partitions based on a date or timestamp column. This can significantly reduce the amount of data scanned during queries.</p>
</li>
<li><p><strong>Clustering:</strong> Use clustering to organize data within partitions based on one or more columns. Clustering improves query performance by minimizing the amount of data that needs to be read.</p>
</li>
</ul>
<p>The Anjan GCP Data Engineering YouTube channel explains both partitioned and clustered tables:</p>
<p><a target="_blank" href="https://youtu.be/96_WpdT4VAg">Big Query Table Partitions with Examples</a></p>
<p><a target="_blank" href="https://youtu.be/L-gXft7Vb_4">Big Query Clustered Tables with Examples</a></p>
<p><strong>2. Use Appropriate Data Types:</strong></p>
<p>Choose the most suitable data types for your columns to minimize storage and improve query performance. Avoid unnecessary use of STRING for numeric or boolean data.</p>
<p><strong>3. Optimize Schema Design:</strong></p>
<p>Normalize or denormalize your schema based on query patterns. Denormalization can reduce the need for JOIN operations and improve query performance.</p>
<p><strong>4. Avoid SELECT *:</strong></p>
<p>Instead of selecting all columns using <code>SELECT *</code>, explicitly list only the columns you need. This reduces the amount of data read and improves query speed.</p>
<p><strong>5. Control Query Output Size:</strong></p>
<p>Use <code>LIMIT</code> and <code>WHERE</code> clauses to control the amount of data retrieved. Minimize the data scanned by only retrieving the necessary rows.</p>
<p><strong>6. Use Batch Loading for Large Data:</strong></p>
<p>When loading large amounts of data, consider using batch loading. Load data in larger chunks rather than row by row for better performance.</p>
<p><strong>7. Optimize JOIN Operations:</strong></p>
<p>If JOIN operations are necessary, use appropriate JOIN types and conditions, and filter or pre-aggregate tables before joining to reduce the amount of data shuffled.</p>
<p><strong>8. Optimize Query Syntax:</strong></p>
<p>Write efficient queries. Review the query execution plan in the Cloud Console (or estimate scanned bytes with a dry run) to identify opportunities for optimization.</p>
<p><strong>9. Materialized Views (Limited):</strong></p>
<p>Consider using materialized views to precompute and store aggregated results. Note that BigQuery materialized views support only a limited set of query shapes (chiefly aggregations over their base tables), so more complex precomputations may require scheduled queries that periodically rewrite a summary table.</p>
<p><strong>10. Monitor and Analyze Performance:</strong></p>
<p>Use BigQuery's monitoring tools to analyze query performance. Review the Query Execution Details page in the Cloud Console for insights into query performance.</p>
<h3 id="heading-what-is-the-slot-in-bigquery"><strong>What is the slot in BigQuery</strong></h3>
<p>A slot is a unit of compute capacity that BigQuery uses to execute SQL queries. With on-demand pricing, BigQuery allocates slots automatically; with capacity-based pricing, you reserve a fixed number of slots.</p>
<p><a target="_blank" href="https://youtu.be/ief-Rz0OFmI">BigQuery Slots</a></p>
<h3 id="heading-how-to-query-data-from-the-amazon-s3-in-bigquery-using-bigquery-omni"><strong>How to query data from Amazon S3 in BigQuery using BigQuery Omni</strong></h3>
<p>BigQuery Omni allows querying data directly from external sources like Amazon S3 without needing to load it into BigQuery first. This offers several advantages, including:</p>
<ul>
<li><p><strong>Real-time access:</strong> Analyze data as soon as it's available in S3 without waiting for loading.</p>
</li>
<li><p><strong>Cost-efficiency:</strong> Avoid unnecessary storage costs by querying data directly from the source.</p>
</li>
<li><p><strong>Scalability:</strong> Process massive datasets without worrying about BigQuery storage limitations.</p>
</li>
</ul>
<p>Here’s how to query data from Amazon S3 with BigQuery Omni:</p>
<p><strong>1. Set up BigQuery Omni:</strong></p>
<ul>
<li><p>Enable BigQuery Omni in your Google Cloud project.</p>
</li>
<li><p>Create a connection to your Amazon S3 account in BigQuery.</p>
</li>
<li><p>Specify IAM permissions for BigQuery to access your S3 data.</p>
</li>
</ul>
<p><strong>2. Create a BigLake table:</strong></p>
<ul>
<li><p>Define a BigLake table in BigQuery that references the data location in your S3 bucket.</p>
</li>
<li><p>Specify the data schema and format (e.g., CSV, JSON, Avro).</p>
</li>
<li><p>Optionally, configure partitioning and clustering for optimal query performance.</p>
</li>
</ul>
<p><strong>3. Write your SQL query:</strong></p>
<ul>
<li><p>Use standard BigQuery SQL syntax to query the data in the BigLake table.</p>
</li>
<li><p>BigQuery automatically fetches the data from S3 on-demand as needed for the query.</p>
</li>
<li><p>You can perform joins, aggregations, and other operations on the S3 data as if it were stored in BigQuery.</p>
</li>
</ul>
<p><strong>4. Export results:</strong></p>
<ul>
<li>Export the results to other destinations like Google Cloud Storage or BigQuery tables.</li>
</ul>
<p><strong>Here are some points to consider when querying data from Amazon S3 to BigQuery Omni:</strong></p>
<ul>
<li><p><strong>Data access control:</strong> Ensure you have proper IAM permissions granted to BigQuery for accessing your S3 data.</p>
</li>
<li><p><strong>Data format:</strong> BigQuery supports various data formats, but ensure the format used in S3 is compatible with your BigLake table definition.</p>
</li>
<li><p><strong>Data size:</strong> Large datasets may require additional configuration for efficient querying and performance.</p>
</li>
<li><p><strong>Query complexity:</strong> Complex queries may require more resources and potentially incur network costs for data retrieval from S3.</p>
</li>
</ul>
<h3 id="heading-what-are-the-different-processes-to-export-data-from-the-bigquery"><strong>What are the different ways to move data in and out of BigQuery</strong></h3>
<p>To export a table to Google Cloud Storage, use <code>bq extract [DATASET.TABLE] [GCS_URI]</code>. The reverse direction, loading data from GCS into BigQuery, uses <code>bq load</code>:</p>
<pre><code class="lang-bash">bq load --source_format [FORMAT] [DATASET.TABLE] [GCS_URI]
</code></pre>
<h3 id="heading-what-operators-used-airflow-dag-to-load-data-from-gcs-to-bigquery"><strong>Which Airflow operators are used to load data from GCS to BigQuery</strong></h3>
<p>Airflow offers several operators for loading data from Google Cloud Storage (GCS) to BigQuery. Here are the most commonly used ones:</p>
<p><strong>GCSToBigQueryOperator:</strong></p>
<p>This operator is the most common choice for loading data from GCS to BigQuery.</p>
<ul>
<li><p><strong>Features:</strong></p>
<ul>
<li><p>Supports various data formats like CSV, JSON, Avro, and Parquet.</p>
</li>
<li><p>Offers options to specify schema, compression, and write disposition.</p>
</li>
<li><p>Allows triggering load jobs based on file arrival or time intervals.</p>
</li>
</ul>
</li>
<li><p><strong>Example:</strong></p>
</li>
</ul>
<pre><code class="lang-python">from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="gcs_to_bigquery",
    start_date=datetime(2023, 12, 13),
    schedule_interval="@daily",
) as dag:

    load_data = GCSToBigQueryOperator(
        task_id="load_data",
        bucket="my_bucket",
        source_objects=["data.csv"],
        destination_project_dataset_table="my_dataset.my_table",
        source_format="CSV",
    )
</code></pre>
<h3 id="heading-difference-in-delete-table-and-truncate-table-in-bigquery"><strong>Difference in DELETE table and Truncate table in BigQuery</strong></h3>
<p><code>TRUNCATE</code> and <code>DELETE</code> are both SQL commands used to remove data from a table, but they differ in their functionality and the way they accomplish the task.</p>
<ol>
<li><p><strong>DELETE:</strong></p>
<ul>
<li><p>The <code>DELETE</code> statement is used to remove rows from a table based on a specified condition or without any condition (which would delete all rows).</p>
</li>
<li><p>It is a logged operation, meaning that each deleted row is recorded in the transaction log, allowing for the possibility of rolling back the transaction.</p>
</li>
<li><p>The <code>DELETE</code> statement is more flexible and can be used with a <code>WHERE</code> clause to specify a condition for deleting rows.</p>
</li>
</ul>
</li>
</ol>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-sql">    <span class="hljs-keyword">DELETE</span> <span class="hljs-keyword">FROM</span> table_name <span class="hljs-keyword">WHERE</span> condition;
</code></pre>
<ol start="2">
<li><p><strong>TRUNCATE:</strong></p>
<ul>
<li><p>The <code>TRUNCATE</code> statement is used to remove all rows from a table quickly and efficiently.</p>
</li>
<li><p>Unlike <code>DELETE</code>, <code>TRUNCATE</code> is not logged for each individual row; instead, it deallocates the data pages used by the table.</p>
</li>
<li><p><code>TRUNCATE</code> is a faster operation than <code>DELETE</code> because it doesn't generate individual row deletion statements, and it doesn't log individual row deletions.</p>
</li>
</ul>
</li>
</ol>
<p>    <strong>Example:</strong></p>
<pre><code class="lang-sql">    <span class="hljs-keyword">TRUNCATE</span> <span class="hljs-keyword">TABLE</span> table_name;
</code></pre>
<p><strong>Key Differences:</strong></p>
<ul>
<li><p><code>DELETE</code> is a row-level operation, allowing you to delete specific rows based on a condition, whereas <code>TRUNCATE</code> is a table-level operation that removes all rows.</p>
</li>
<li><p><code>DELETE</code> is slower than <code>TRUNCATE</code> because it logs each row deletion, making it suitable for smaller-scale row deletions where logging is necessary.</p>
</li>
<li><p><code>TRUNCATE</code> is more efficient for removing all rows from a table, but it cannot be used when the table is referenced by a foreign key constraint or if it participates in an indexed view.</p>
</li>
</ul>
<p>In summary, use <code>DELETE</code> when you need to selectively remove specific rows or when logging individual row deletions is necessary. Use <code>TRUNCATE</code> when you want to quickly remove all rows from a table.</p>
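<p>As a quick illustration, here is a sketch using Python's built-in sqlite3 module. SQLite has no <code>TRUNCATE</code>, so removing every row falls back to an unconditioned <code>DELETE</code>, while BigQuery supports <code>TRUNCATE TABLE</code> directly:</p>
<pre><code class="lang-python">import sqlite3

# A throwaway in-memory table to contrast conditional and full deletes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "a"), (2, "b"), (3, "c")])

# DELETE with a WHERE clause removes only the matching rows.
conn.execute("DELETE FROM t WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 2

# Removing every row: TRUNCATE TABLE t in BigQuery; SQLite has no
# TRUNCATE, so an unconditioned DELETE does the same job here.
conn.execute("DELETE FROM t")
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 0
</code></pre>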
<h1 id="heading-sql">SQL</h1>
<p>SQL is one of the main parts of a BigQuery interview.</p>
<p><strong>Question:</strong></p>
<p>Write a query to get the output below (the highest-paid employee(s) in each department)</p>
<p>Tables</p>
<pre><code class="lang-sql">Employee table:
+<span class="hljs-comment">----+-------+--------+--------------+</span>
| id | name  | salary | departmentId |
+<span class="hljs-comment">----+-------+--------+--------------+</span>
| 1  | Joe   | 70000  | 1            |
| 2  | Jim   | 90000  | 1            |
| 3  | Henry | 80000  | 2            |
| 4  | Sam   | 60000  | 2            |
| 5  | Max   | 90000  | 1            |
+<span class="hljs-comment">----+-------+--------+--------------+</span>
Department table:
+<span class="hljs-comment">----+-------+</span>
| id | name  |
+<span class="hljs-comment">----+-------+</span>
| 1  | IT    |
| 2  | Sales |
+<span class="hljs-comment">----+-------+</span>
</code></pre>
<p>Output</p>
<pre><code class="lang-sql">+<span class="hljs-comment">------------+----------+--------+</span>
| Department | Employee | Salary |
+<span class="hljs-comment">------------+----------+--------+</span>
| IT         | Jim      | 90000  |
| Sales      | Henry    | 80000  |
| IT         | Max      | 90000  |
+<span class="hljs-comment">------------+----------+--------+</span>
</code></pre>
<p><strong>Solution:</strong></p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> d.name <span class="hljs-keyword">AS</span> Department,
       e.name <span class="hljs-keyword">AS</span> Employee,
       e.salary <span class="hljs-keyword">AS</span> Salary
<span class="hljs-keyword">FROM</span> employee e
<span class="hljs-keyword">JOIN</span> department d <span class="hljs-keyword">ON</span> e.departmentId = d.id
<span class="hljs-keyword">JOIN</span> (<span class="hljs-keyword">SELECT</span> departmentId, <span class="hljs-keyword">MAX</span>(salary) <span class="hljs-keyword">AS</span> maxSalary
      <span class="hljs-keyword">FROM</span> employee
      <span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> departmentId) m
  <span class="hljs-keyword">ON</span> e.departmentId = m.departmentId <span class="hljs-keyword">AND</span> e.salary = m.maxSalary;
</code></pre>
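<p>One way to sanity-check a per-department highest-salary query is to run it against the sample data in an in-memory SQLite database (a sketch; the join-on-aggregate form below also works in BigQuery):</p>
<pre><code class="lang-python">import sqlite3

# Recreate the sample tables in memory and run the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary INTEGER, departmentId INTEGER)")
conn.execute("CREATE TABLE department (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
                 [(1, "Joe", 70000, 1), (2, "Jim", 90000, 1),
                  (3, "Henry", 80000, 2), (4, "Sam", 60000, 2),
                  (5, "Max", 90000, 1)])
conn.executemany("INSERT INTO department VALUES (?, ?)", [(1, "IT"), (2, "Sales")])

rows = conn.execute("""
    SELECT d.name, e.name, e.salary
    FROM employee e
    JOIN department d ON e.departmentId = d.id
    JOIN (SELECT departmentId, MAX(salary) AS maxSalary
          FROM employee GROUP BY departmentId) m
      ON e.departmentId = m.departmentId AND e.salary = m.maxSalary
""").fetchall()
print(sorted(rows))
# [('IT', 'Jim', 90000), ('IT', 'Max', 90000), ('Sales', 'Henry', 80000)]
</code></pre>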
<p><strong>Question:</strong></p>
<p>Find the average salary for each department. Display the department name and the average salary.</p>
<p><strong>Solution:</strong></p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> d.name <span class="hljs-keyword">AS</span> Department,
       <span class="hljs-keyword">AVG</span>(e.salary) <span class="hljs-keyword">AS</span> AverageSalary
<span class="hljs-keyword">FROM</span> employee e
<span class="hljs-keyword">JOIN</span> department d <span class="hljs-keyword">ON</span> e.departmentId = d.id
<span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> d.name;
</code></pre>
<p><strong>Question:</strong></p>
<p>List the employees who have a salary greater than the average salary across all departments. Display the employee name, salary, and department name.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">WITH</span> AvgSalaryCTE <span class="hljs-keyword">AS</span> (
  <span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">AVG</span>(salary) <span class="hljs-keyword">AS</span> AvgSalary
  <span class="hljs-keyword">FROM</span> employee
)

<span class="hljs-keyword">SELECT</span> e.name <span class="hljs-keyword">AS</span> Employee,
       e.salary <span class="hljs-keyword">AS</span> Salary,
       d.name <span class="hljs-keyword">AS</span> Department
<span class="hljs-keyword">FROM</span> employee e
<span class="hljs-keyword">JOIN</span> department d <span class="hljs-keyword">ON</span> e.departmentId = d.id
<span class="hljs-keyword">CROSS</span> <span class="hljs-keyword">JOIN</span> AvgSalaryCTE
<span class="hljs-keyword">WHERE</span> e.salary &gt; AvgSalary;
</code></pre>
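<p>The same sample data can verify this query. The overall average salary is 78000, so Jim, Henry, and Max should qualify (again a sqlite3 sketch):</p>
<pre><code class="lang-python">import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, name TEXT, salary INTEGER, departmentId INTEGER)")
conn.execute("CREATE TABLE department (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
                 [(1, "Joe", 70000, 1), (2, "Jim", 90000, 1),
                  (3, "Henry", 80000, 2), (4, "Sam", 60000, 2),
                  (5, "Max", 90000, 1)])
conn.executemany("INSERT INTO department VALUES (?, ?)", [(1, "IT"), (2, "Sales")])

# Overall average salary is 390000 / 5 = 78000.
rows = conn.execute("""
    WITH AvgSalaryCTE AS (SELECT AVG(salary) AS AvgSalary FROM employee)
    SELECT e.name, e.salary, d.name
    FROM employee e
    JOIN department d ON e.departmentId = d.id
    CROSS JOIN AvgSalaryCTE
    WHERE e.salary > AvgSalaryCTE.AvgSalary
""").fetchall()
print(sorted(rows))
# [('Henry', 80000, 'Sales'), ('Jim', 90000, 'IT'), ('Max', 90000, 'IT')]
</code></pre>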
<p><strong>Question</strong>:</p>
<p>Find the department with the highest total salary. Display the department name and the total salary.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> d.name <span class="hljs-keyword">AS</span> Department,
       <span class="hljs-keyword">SUM</span>(e.salary) <span class="hljs-keyword">AS</span> TotalSalary
<span class="hljs-keyword">FROM</span> employee e
<span class="hljs-keyword">JOIN</span> department d <span class="hljs-keyword">ON</span> e.departmentId = d.id
<span class="hljs-keyword">GROUP</span> <span class="hljs-keyword">BY</span> d.name
<span class="hljs-keyword">ORDER</span> <span class="hljs-keyword">BY</span> TotalSalary <span class="hljs-keyword">DESC</span>
<span class="hljs-keyword">LIMIT</span> <span class="hljs-number">1</span>;
</code></pre>
<p><strong>Question:</strong></p>
<p>List the employees and their salaries in descending order of salary. For employees with the same salary, order them alphabetically by name.</p>
<p><strong>Solution:</strong></p>
<pre><code class="lang-sql">SELECT name AS Employee,
       salary AS Salary
FROM employee
ORDER BY salary DESC, name;
</code></pre>
<p><strong>Question:</strong></p>
<p>Find the department(s) where the average salary is greater than a specified value (e.g., 80000). Display the department name and the average salary.</p>
<p><strong>Solution:</strong></p>
<pre><code class="lang-sql">SELECT d.name AS Department,
       AVG(e.salary) AS AverageSalary
FROM employee e
JOIN department d ON e.departmentId = d.id
GROUP BY d.name
HAVING AVG(e.salary) &gt; 80000;
</code></pre>
<hr />
<p><strong>Question</strong>:</p>
<p>What is the output of joining Table A and Table B (each a single column of values) with an Inner Join, a Left Outer Join, and a Full Join?</p>
<p><strong>Table A:</strong> 1, 2, 2, 3, 4, 6, NULL, NULL</p>
<p><strong>Table B:</strong> 1, 1, 2, 7, 8, NULL</p>
<p><strong>Solution:</strong></p>
<p>Inner Join: only values present on both sides match, and NULL never matches NULL. A's single 1 matches both 1s in B, and each 2 in A matches B's 2.</p>
<p><strong>Result:</strong> 1, 1, 2, 2 (4 rows)</p>
<p>Left Outer Join: every row of A is kept, so the inner-join rows are followed by the unmatched 3, 4, 6 and the two NULLs.</p>
<p><strong>Result:</strong> 1, 1, 2, 2, 3, 4, 6, NULL, NULL (9 rows)</p>
<p>Full Join: the left-join rows plus B's unmatched 7, 8 and NULL (taking the non-NULL side's value where one exists).</p>
<p><strong>Result:</strong> 1, 1, 2, 2, 3, 4, 6, 7, 8, NULL, NULL, NULL (12 rows)</p>
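<p>These results can be checked with Python's sqlite3 (a sketch; the FULL JOIN is emulated with a LEFT JOIN plus the unmatched right-hand rows, for SQLite builds without native FULL OUTER JOIN support):</p>
<pre><code class="lang-python">import sqlite3

# Single-column tables A and B from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (v INTEGER)")
conn.execute("CREATE TABLE b (v INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)",
                 [(1,), (2,), (2,), (3,), (4,), (6,), (None,), (None,)])
conn.executemany("INSERT INTO b VALUES (?)",
                 [(1,), (1,), (2,), (7,), (8,), (None,)])

inner = conn.execute("SELECT a.v FROM a JOIN b ON a.v = b.v").fetchall()
left = conn.execute("SELECT a.v, b.v FROM a LEFT JOIN b ON a.v = b.v").fetchall()
# FULL OUTER JOIN emulated as the LEFT JOIN plus B rows with no match in A.
full = conn.execute("""
    SELECT a.v, b.v FROM a LEFT JOIN b ON a.v = b.v
    UNION ALL
    SELECT a.v, b.v FROM b LEFT JOIN a ON a.v = b.v WHERE a.v IS NULL
""").fetchall()

print(sorted(r[0] for r in inner))  # [1, 1, 2, 2]
print(len(left), len(full))         # 9 12
</code></pre>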
<hr />
<p><strong>Question:</strong></p>
<p><code>SELECT CONCAT('a', NULL, 'b')</code></p>
<p><strong>Result</strong>:</p>
<p><code>NULL</code></p>
<p>In SQL, concatenating anything with <code>NULL</code> results in <code>NULL</code>. If you want to treat a <code>NULL</code> argument as an empty string instead, substitute it with a function such as <code>COALESCE</code>.</p>
<pre><code class="lang-sql"><span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">CONCAT</span>(<span class="hljs-string">'a'</span>, <span class="hljs-keyword">COALESCE</span>(<span class="hljs-literal">NULL</span>, <span class="hljs-string">''</span>), <span class="hljs-string">'b'</span>);
<span class="hljs-comment">-- or</span>
<span class="hljs-keyword">SELECT</span> <span class="hljs-keyword">CONCAT</span>(<span class="hljs-string">'a'</span>, <span class="hljs-keyword">IFNULL</span>(<span class="hljs-literal">NULL</span>, <span class="hljs-string">''</span>), <span class="hljs-string">'b'</span>);
</code></pre>
<p>Both of these queries will result in <code>'ab'</code>.</p>
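<p>The behaviour is easy to reproduce with Python's sqlite3 (a sketch; SQLite spells concatenation as <code>||</code>, but the NULL semantics are the same):</p>
<pre><code class="lang-python">import sqlite3

conn = sqlite3.connect(":memory:")
# As with CONCAT in BigQuery, a NULL operand makes the whole result NULL.
print(conn.execute("SELECT 'a' || NULL || 'b'").fetchone()[0])
# None
print(conn.execute("SELECT 'a' || COALESCE(NULL, '') || 'b'").fetchone()[0])
# ab
</code></pre>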
]]></content:encoded></item><item><title><![CDATA[How to do manual data reconciliation using python and SQL]]></title><description><![CDATA[Manual data reconciliation using python and SQL
Data reconciliation is an important part of the data migration. After the completion of data migration we have to compare data from the source data to destination data and here we are going to compare t...]]></description><link>https://blog.vazid.live/how-to-do-manual-data-reconciliation-using-python-and-sql-3165168663e</link><guid isPermaLink="true">https://blog.vazid.live/how-to-do-manual-data-reconciliation-using-python-and-sql-3165168663e</guid><category><![CDATA[data]]></category><category><![CDATA[SQL]]></category><category><![CDATA[Python]]></category><category><![CDATA[bigquery]]></category><category><![CDATA[Google]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Tue, 24 May 2022 07:54:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1679902683153/87755149-a57a-490d-8457-4935b40a9f6c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Manual data reconciliation using python and SQL</p>
<p><strong>Data reconciliation</strong> is an important part of data migration. After the migration completes, we have to compare the source data with the destination data; here we are going to compare record counts, which is one part of data reconciliation.</p>
<p>Whenever we do a data migration we have to build a manifesto file, which records when, where, and how much data was transferred. In this blog we are going to do manual data reconciliation using this manifesto file, following the steps below. Before starting the reconciliation, you have to migrate the data to BigQuery.</p>
<ul>
<li><p>Create a manifesto table in BigQuery using Terraform</p>
</li>
<li><p>Load the manifesto CSV data into BigQuery using a Python script</p>
</li>
<li><p>Compare the manifesto record counts with the BigQuery record counts</p>
</li>
</ul>
<h3 id="heading-1-create-a-manifesto-table-using-terraform">1. Create a manifesto table using terraform</h3>
<p>Here we are going to use the same dataset we created in the previous blog, <a target="_blank" href="https://sudovazid.medium.com/create-dataset-and-table-in-bigquery-using-terraform-c89a5affa61b">Create a BigQuery dataset and table using terraform</a>; we will deploy our manifesto table in that dataset by using its dataset id.</p>
<h3 id="heading-maintf">main.tf</h3>
<h3 id="heading-variabletf">variable.tf</h3>
<h3 id="heading-terraformtfvars">terraform.tfvars</h3>
<h3 id="heading-2-load-manifesto-data-from-csv-file-to-bigquery-using-python">2. Load manifesto data from CSV file to BigQuery using python</h3>
<p>Here is the Python script we can use to load data from a CSV file in a GCS bucket</p>
<p>load data python script</p>
<p>In this Python script you need to add your project id and your bucket name</p>
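<p>The original script embed did not survive the export; below is a minimal, hypothetical sketch of such a loader using the <code>google-cloud-bigquery</code> client. The file name, table id format, and helper names are placeholders, not the original script:</p>
<pre><code class="lang-python">def manifesto_uri(bucket_name, filename="manifesto.csv"):
    """Compose the gs:// URI of the manifesto CSV (filename is a placeholder)."""
    return "gs://{}/{}".format(bucket_name, filename)

def load_manifesto_csv(project_id, bucket_name, table_id):
    # Imported here so manifesto_uri stays usable without the library installed.
    from google.cloud import bigquery

    client = bigquery.Client(project=project_id)
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,  # skip the CSV header row
        autodetect=True,      # let BigQuery infer the schema
    )
    job = client.load_table_from_uri(manifesto_uri(bucket_name), table_id,
                                     job_config=job_config)
    job.result()  # block until the load job finishes
    return client.get_table(table_id).num_rows
</code></pre>
<p>For example, <code>load_manifesto_csv("my-project", "my-bucket", "my-project.my_dataset.tbl_manifesto")</code> would return the row count after the load (names here are illustrative).</p>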
<h3 id="heading-3-compare-bigquery-table-record-count-to-manifesto-record-count">3. Compare BigQuery table record count to manifesto record count</h3>
<p>compare record count using SQL</p>
<p>We need to run this query in BigQuery</p>
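<p>The comparison embed is also missing from the export, but the idea can be sketched in miniature with sqlite3: check each manifesto entry against a live <code>COUNT(*)</code> of the corresponding table (table names here are made up for illustration):</p>
<pre><code class="lang-python">import sqlite3

conn = sqlite3.connect(":memory:")
# tbl_manifesto records how many rows each migrated table should have.
conn.execute("CREATE TABLE tbl_manifesto (table_name TEXT, record_count INTEGER)")
conn.execute("INSERT INTO tbl_manifesto VALUES ('orders', 3), ('customers', 2)")
conn.execute("CREATE TABLE orders (id INTEGER)")
conn.execute("INSERT INTO orders VALUES (1), (2), (3)")
conn.execute("CREATE TABLE customers (id INTEGER)")
conn.execute("INSERT INTO customers VALUES (1)")  # deliberately one row short

# Compare the recorded count with a live COUNT(*) for every table.
results = []
for table_name, expected in conn.execute("SELECT * FROM tbl_manifesto").fetchall():
    actual = conn.execute("SELECT COUNT(*) FROM " + table_name).fetchone()[0]
    results.append((table_name, expected, actual, actual == expected))

print(results)
# [('orders', 3, 3, True), ('customers', 2, 1, False)]
</code></pre>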
<h1 id="heading-output">Output</h1>
<p>After running the SQL query</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679902680128/09d236b2-8b0d-441c-9077-29056fb0fb9a.png" alt /></p>
<p>Record count output</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In this blog, we created a BigQuery manifesto table, loaded the CSV data into it, and then ran a SQL query to compare the manifesto records with the BigQuery table. You can get all the scripts from the GitHub account and the Excel sheet from Google Sheets.</p>
<h1 id="heading-reference">Reference</h1>
<h3 id="heading-google-docs">Google Docs</h3>
<p><a target="_blank" href="https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-csv">Loading CSV data from Cloud Storage</a></p>
<h3 id="heading-links">Links</h3>
<p><a target="_blank" href="https://docs.google.com/spreadsheets/d/1Sn1yDQgXTda71C6rtVGIXD-Pz9LtWoJqpuntx72XReo/edit?usp=sharing">tbl_manifesto Sheet1</a></p>
<p><a target="_blank" href="https://github.com/sudovazid/manifesto.git">GitHub - sudovazid/manifesto: This is terraform for manifesto file</a></p>
]]></content:encoded></item><item><title><![CDATA[Create BigQuery dataset and table using terraform]]></title><description><![CDATA[Create BigQuery dataset and table using terraform
This is series BigQuery 101. Here in this blogs I am going to build base of our project by creating BigQuery dataset and tables using terraform.
You can also find this code in GitHub
GitHub - sudovazi...]]></description><link>https://blog.vazid.live/create-dataset-and-table-in-bigquery-using-terraform-c89a5affa61b</link><guid isPermaLink="true">https://blog.vazid.live/create-dataset-and-table-in-bigquery-using-terraform-c89a5affa61b</guid><category><![CDATA[Google Cloud Platform]]></category><category><![CDATA[bigquery]]></category><category><![CDATA[Terraform]]></category><category><![CDATA[GCP]]></category><category><![CDATA[data]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Sat, 21 May 2022 10:13:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1679902665776/ccfd1999-10e6-413e-8706-dea36eb2382d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Create BigQuery dataset and table using terraform</p>
<p>This is part of the BigQuery 101 series. In this blog I am going to build the base of our project by creating a BigQuery dataset and tables using <strong>terraform</strong>.</p>
<p>You can also find this code in GitHub</p>
<p><a target="_blank" href="https://github.com/sudovazid/gcp_terraform"><strong>GitHub - sudovazid/gcp_terraform</strong></a></p>
<p>Now, before starting the terraform code, let’s understand datasets and tables in BigQuery</p>
<blockquote>
<p>As per google cloud <strong>a dataset</strong> is contained within a specific <a target="_blank" href="https://cloud.google.com/docs/overview#projects">project</a>. Datasets are top-level containers that are used to organize and control access to your <a target="_blank" href="https://cloud.google.com/bigquery/docs/tables">tables</a> and <a target="_blank" href="https://cloud.google.com/bigquery/docs/views">views</a></p>
</blockquote>
<p>In this blog we are going to use a dataset to store the migrated table data and to run queries</p>
<p>In the dataset folder we will have three terraform files: <strong><em>main.tf, variable.tf,</em></strong> and <strong><em>output.tf</em></strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679902667896/fc506e00-fabf-41cc-97b1-6eb78f2222e2.png" alt /></p>
<p>BigQuery dataset table folder</p>
<p>We will create the <strong><em>main.tf</em></strong>, <strong><em>variable.tf</em></strong>, and <strong><em>output.tf</em></strong> files in the terraform folder. The main file holds the dataset resource that deploys the BigQuery dataset, the variable file declares the variables used to build the dataset, and the output file exposes the dataset id after creation, which will be used to create the tables</p>
<h3 id="heading-datasettf">dataset.tf</h3>
<p>Deploy dataset to BigQuery</p>
<h3 id="heading-variabletf">variable.tf</h3>
<h3 id="heading-outputtf">output.tf</h3>
<p>Now we will start with table folder</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679902669438/157d98a3-9f37-4423-a587-648a16d5576b.png" alt /></p>
<p>BigQuery table folder</p>
<p>Here we have <strong>main.tf</strong> and <strong>variable.tf</strong></p>
<blockquote>
<p>A BigQuery table contains individual records organized in rows. Each record is composed of columns (also called <em>fields</em>).</p>
</blockquote>
<h3 id="heading-tabletf">table.tf</h3>
<h3 id="heading-variabletf-1">variable.tf</h3>
<p>Let’s combine all the resources by building modules.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679902670951/1a8c5ff3-ed61-4b8a-9c0a-b5c72ae34882.png" alt /></p>
<p>BigQuery terraform module structure</p>
<p>Here we have <strong>main.tf</strong> to create the terraform modules, <strong>variable.tf</strong> where we declare all the variables, and <strong>terraform.tfvars</strong> where we assign values to those variables; we can also pass the provider variables here: <strong><em>project_id, region, zone</em></strong></p>
<h3 id="heading-maintf">main.tf</h3>
<h3 id="heading-variabletf-2">variable.tf</h3>
<h3 id="heading-terraformtfvars">terraform.tfvars</h3>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>In this blog we have successfully deployed a BigQuery dataset and table using terraform</p>
<h3 id="heading-references"><strong>References</strong></h3>
<p><a target="_blank" href="https://cloud.google.com/bigquery/docs/tables-intro"><strong>Introduction to tables | BigQuery | Google Cloud</strong></a></p>
<p><a target="_blank" href="https://cloud.google.com/bigquery/docs/datasets-intro"><strong>Introduction to datasets | BigQuery | Google Cloud</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[Automate File storage security in AWS S3 bucket using Trend Micro Cloud One]]></title><description><![CDATA[This blog is used to create a secure S3 bucket in an AWS account Using Trend Micro file storage security service
Trend Micro Cloud One has lots of products to secure our cloud, container and data centre. Which works for both enterprise data centres a...]]></description><link>https://blog.vazid.live/automate-file-storage-security-in-aws-s3-bucket-using-trend-micro-cloud-one-9d01d64201f4</link><guid isPermaLink="true">https://blog.vazid.live/automate-file-storage-security-in-aws-s3-bucket-using-trend-micro-cloud-one-9d01d64201f4</guid><category><![CDATA[AWS]]></category><category><![CDATA[Security]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[trendmicro]]></category><dc:creator><![CDATA[Sheikh Vazid]]></dc:creator><pubDate>Sun, 23 May 2021 13:50:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1679903380876/7a632c17-0486-468a-ba15-0c46339e80f5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This blog is used to create a secure S3 bucket in an AWS account Using Trend Micro file storage security service</p>
<p>Trend Micro Cloud One has lots of products to secure our cloud, containers, and data centres, which work for both enterprise data centres and the cloud.</p>
<p>Here we are going to set up File Storage Security on an S3 bucket. When a user uploads a file to the bucket, the Trend Micro FSS service scans it and shows the activity in the Trend Micro Cloud One console; clean files are then moved to the promote bucket and malicious files to the quarantine bucket.</p>
<h1 id="heading-prerequisite"><strong>Prerequisite</strong></h1>
<p>Trend Micro cloud one access</p>
<p>AWS account admin access</p>
<p>Three S3 buckets: a scan bucket, a quarantine bucket, and a promote bucket</p>
<h1 id="heading-setup"><strong>Setup</strong></h1>
<p>Login into Trend Micro Cloud Console <a target="_blank" href="https://cloudone.trendmicro.com/">Trend Micro Cloud One</a></p>
<p>Select File Security Storage</p>
<p>Click on <strong>Deploy</strong></p>
<p>Select <strong>Scanner Stack and Storage Stack</strong> and select the us-west-2 (Oregon) region</p>
<p>Click on <strong>Launch</strong> stack in the AWS account; it will create a nested stack</p>
<p>Specify the scan bucket name in the S3BucketToScan parameter</p>
<p>After stack completion, we can see that we have two stacks: the Scanner stack and the Storage stack.</p>
<p>Copy and paste <strong>ScannerStackManagementRoleARN</strong> in trend micro console Deploy All-in-One-Stack Dialog box</p>
<p>Then add the storage stack in the Trend Micro console: click on <strong>Add Storage</strong></p>
<p>Copy and paste <strong>StorageStackManagementRoleARN</strong> in the trend micro console Deploy Storage Stack Dialog box</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1679903378416/59323e4a-bbe0-4316-b0e7-56f4f513dee9.png" alt /></p>
<p>Trend Micro Cloud One Scan Activity</p>
<h3 id="heading-step-to-setup-post-scan-action-plugin"><strong>Step to setup Post scan Action Plugin</strong></h3>
<p>After completing the Storage and Scanner stacks, we need to create a function that places clean files in one bucket and malicious files in another</p>
<p>Click on <a target="_blank" href="https://console.aws.amazon.com/lambda/home?#/create/app?applicationId=arn:aws:serverlessrepo:us-east-1:415485722356:applications/cloudone-filestorage-plugin-action-promote-or-quarantine">this link</a> to create the Lambda function stack named ‘serverlessrepo-cloudone-filestorage-plugin-action-promote-or-quarantine’</p>
<p>Copy <strong>ScanResultTopicARN</strong> from Storage Stack and paste it into the ScanResultTopicARN parameter</p>
<p>Specify the promote and quarantine bucket names in the CloudFormation stack parameters</p>
<h1 id="heading-test-the-solution"><strong>Test the solution</strong></h1>
<p>Download the Malicious zip file from this <a target="_blank" href="https://www.eicar.org/?page_id=3950">link</a></p>
<p>Upload the Zip file into the Scanned bucket</p>
<p>Upload and clean the file in the scanned bucket</p>
<p>Now we can monitor Scan Activity in Trend Micro Console</p>
<p>Also, files are moved from the scan bucket to the Quarantine bucket or the Promote bucket</p>
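<p>The promote-or-quarantine plugin reacts to the scan result published on the SNS topic, and its routing rule is simple: clean files go to the promote bucket, flagged files to the quarantine bucket. A hypothetical sketch of that decision plus an upload helper using boto3 (illustrative only, not the plugin's actual code):</p>
<pre><code class="lang-python">def destination_bucket(scan_findings, promote_bucket, quarantine_bucket):
    """Route a scanned object: clean files are promoted, flagged files quarantined."""
    return quarantine_bucket if scan_findings else promote_bucket

def upload_for_scanning(path, scan_bucket):
    # Imported lazily so destination_bucket stays usable without boto3.
    import boto3

    s3 = boto3.client("s3")
    # Use the file's base name as the object key in the scan bucket.
    s3.upload_file(path, scan_bucket, path.rsplit("/", 1)[-1])
</code></pre>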
<h1 id="heading-conclusion"><strong>Conclusion</strong></h1>
<p>Trend Micro has services which we can use to secure our data in cloud and enterprise data centres. For example, here we used the File Storage Security service to secure data in an S3 bucket, so users can’t upload unwanted files into the bucket.</p>
<h1 id="heading-reference"><strong>Reference</strong></h1>
<h3 id="heading-trend-micro-docs"><strong>Trend Micro Docs</strong></h3>
<p><a target="_blank" href="https://cloudone.trendmicro.com/docs/file-storage-security/gs-sign-in/">Sign in — File Storage Security | Trend Micro Cloud One™ Documentation</a></p>
<h3 id="heading-github-source"><strong>GitHub Source</strong></h3>
<p><a target="_blank" href="https://github.com/trendmicro/cloudone-filestorage-plugins/tree/master/post-scan-actions/aws-python-promote-or-quarantine">cloudone-filestorage-plugins/post-scan-actions/aws-python-promote-or-quarantine at master · trendmicro/cloudone-filestorage-plugins (github.com)</a></p>
<p><a target="_blank" href="https://github.com/trendmicro/cloudone-filestorage-cloudformation-templates/blob/master/templates/FSS-All-In-One.template">cloudone-filestorage-cloudformation-templates/FSS-All-In-One.template at master · trendmicro/cloudone-filestorage-cloudformation-templates (github.com)</a></p>
]]></content:encoded></item></channel></rss>