<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Zero to Cloud]]></title><description><![CDATA[Practical notes from a Software Engineer transitioning into Cloud and DevOps. Writing about AWS, Terraform, CI/CD, and infrastructure, one hands-on experiment a]]></description><link>https://blog.lalitbagga.com</link><image><url>https://cdn.hashnode.com/uploads/logos/69f0909e10a70b333597b082/398fc17e-420c-43c0-a25e-357ea3669a6d.png</url><title>Zero to Cloud</title><link>https://blog.lalitbagga.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 01 May 2026 21:22:09 GMT</lastBuildDate><atom:link href="https://blog.lalitbagga.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Breaking Lambda to Learn It: S3 Triggers, Permissions, and Pitfalls]]></title><description><![CDATA[Intro
I wanted to build something simple — upload a file to S3, have Lambda pick it up, log the filename. That's it. But "simple" in AWS always has layers, and somewhere between the IAM console and Cl]]></description><link>https://blog.lalitbagga.com/breaking-lambda-to-learn-it-s3-triggers-permissions-and-pitfalls</link><guid isPermaLink="true">https://blog.lalitbagga.com/breaking-lambda-to-learn-it-s3-triggers-permissions-and-pitfalls</guid><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Lalit Bagga]]></dc:creator><pubDate>Fri, 01 May 2026 19:58:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/25ae8a04-1250-4441-9fac-fff2358bc783.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Intro</h3>
<p>I wanted to build something simple — upload a file to S3, have Lambda pick it up, log the filename. That's it. But "simple" in AWS always has layers, and somewhere between the IAM console and CloudWatch, I realized I didn't actually understand <em>why</em> it worked. So I broke it. On purpose. Twice. And honestly, that's when things got interesting.</p>
<h3>1. The Setup</h3>
<p>I started with the basics: an S3 bucket, a Lambda function, and a trigger connecting the two. Nothing fancy.</p>
<p><strong>Create the S3 bucket</strong></p>
<p>Head to the S3 console and create a new bucket. I kept all the defaults — block public access on, no versioning. Name it something you'll recognize, like <code>lambda-trigger-test-1x</code>.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/5a2d7062-c94d-4195-9f52-a2989fa435a4.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Create the Lambda function</strong></p>
<p>In the Lambda console, create a new function from scratch. I used Python 3.12, and I let AWS create a new execution role with basic Lambda permissions.</p>
<p>Here's the function I used — dead simple:</p>
<pre><code class="language-python">import json

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print(f"File uploaded: s3://{bucket}/{key}")

    response = {
        'statusCode': 200,
        'body': json.dumps('Done')
    }
    print(f"Response: {response}")
    return response
</code></pre>
<p><strong>Wire S3 as a trigger</strong></p>
<p>Inside your Lambda function, go to the <strong>Configuration → Triggers</strong> tab and add a trigger. Select S3, pick your bucket, set the event type to <code>PUT</code>, and press <strong>Add</strong>. AWS will automatically add a resource-based policy that lets S3 invoke the function.</p>
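<p>Under the hood, that trigger is just a notification configuration on the bucket, and you can set up the same thing from code with boto3's <code>put_bucket_notification_configuration</code>. A sketch, assuming the bucket name from this post and a placeholder Lambda ARN you'd replace with your own (the live AWS call only runs if you set <code>APPLY_CHANGES</code>, so the file is safe to run dry):</p>
<pre><code class="language-python">import os

def build_notification_config(lambda_arn):
    """Notification that fires the given Lambda on object PUTs."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": lambda_arn,
                "Events": ["s3:ObjectCreated:Put"],
            }
        ]
    }

if os.environ.get("APPLY_CHANGES"):
    import boto3  # imported here so the helper above stays dependency-free
    # Placeholder ARN -- replace with your function's real ARN
    arn = "arn:aws:lambda:us-east-1:123456789012:function:lambda-function-test-1x"
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket="lambda-trigger-test-1x",
        NotificationConfiguration=build_notification_config(arn),
    )
</code></pre>
<p>Note this call only wires the notification. S3 still needs permission to invoke the function, which is exactly what the console adds for you and what we're about to break.</p>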
<h3>2. It Works — But Do You Know Why?</h3>
<p>Upload any file to the bucket. Give it 5–10 seconds, then head to <strong>CloudWatch → Logs → Log groups</strong> and find the log group for your function.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/1147f085-55df-449c-be8b-d24895d0f6d9.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/f96997f3-d631-4629-ac88-277162e22f5a.png" alt="" style="display:block;margin:0 auto" />

<p>What you're seeing in the logs is the <code>event</code> object that Lambda receives. The important part of that structure looks like this:</p>
<pre><code class="language-json">{
  "Records": [
    {
      "s3": {
        "bucket": {
          "name": "lambda-trigger-test-1x"
        },
        "object": {
          "key": "test_file.png",
          "size": 1024
        }
      }
    }
  ]
}
</code></pre>
<p><em>Note: The actual event has more fields like <code>eventTime</code>, <code>eventName</code>, <code>userIdentity</code>, etc., but for basic file uploads, you only need <code>bucket.name</code> and <code>object.key</code>.</em></p>
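<p>You don't have to upload a file every time to test this parsing; the handler can be invoked locally with a hand-built event. A minimal sketch, using only the fields the function actually reads:</p>
<pre><code class="language-python">import json

# Same handler as above
def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print(f"File uploaded: s3://{bucket}/{key}")
    return {'statusCode': 200, 'body': json.dumps('Done')}

# Hand-built event, trimmed to the fields the handler reads
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "lambda-trigger-test-1x"},
                "object": {"key": "test_file.png", "size": 1024}}}
    ]
}

result = lambda_handler(sample_event, None)
print(result)
</code></pre>
<p>No AWS calls are involved, so this runs anywhere Python does, which is handy for checking the parsing logic before you touch the console at all.</p>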
<p>S3 sends this payload to Lambda every time a file is uploaded. Your function just loops through the records and prints the bucket and key. Clean, readable, it works. But here's the thing: AWS added a resource-based policy without really explaining what that meant. I should have read it. That was my mistake, and I was about to discover exactly why it matters.</p>
<h3>3. Now Break It</h3>
<p>Here's where it gets good. I went into the Lambda console, clicked on Configuration → Permissions, scrolled down to the Resource-based policy section, and deleted the policy statement that S3 had been given.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/fc66c4a6-0162-432f-b933-038506dc3656.png" alt="" style="display:block;margin:0 auto" />

<p>Then I went back to S3 and uploaded another file. Nothing. No logs. No error. No CloudWatch entry. Complete silence. I waited. I uploaded again. Still nothing.</p>
<p>This is the sneaky part of AWS permissions — when S3 doesn't have permission to invoke your Lambda function, it doesn't tell you. It doesn't email you. There's no error in S3. There's nothing in Lambda. The file lands in the bucket and S3 just... shrugs.</p>
<p>Why does this happen? S3 triggers work by S3 calling Lambda's Invoke API on your behalf. For that to work, Lambda has to explicitly say "yes, S3 is allowed to call me." That permission lives in the resource-based policy — it's attached to the Lambda function itself, not to any IAM user or role. When you delete it, S3 tries to invoke Lambda, gets a 403, and quietly moves on. No retry, no alert, nothing logged on your end.</p>
<h3>4. Fix It Manually</h3>
<p>Time to put it back. In the Lambda console under Configuration → Permissions, click Add permissions and select AWS service.</p>
<p>Fill it in like this:</p>
<ul>
<li>Statement ID: <code>s3-123</code> (anything works)</li>
<li>Principal: <code>s3.amazonaws.com</code></li>
<li>Action: <code>lambda:InvokeFunction</code></li>
<li>Source ARN: <code>arn:aws:s3:::lambda-trigger-test-1x</code></li>
<li>Source account: your account ID, visible by clicking your account name in the console (XXX-XXXX-XXXX)</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/06225cd1-19e9-4af0-b89c-9870d7b62abe.png" alt="" style="display:block;margin:0 auto" />

<p>Save it, go upload a file, and check CloudWatch. You should see your log appear again.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/85c8ff0d-1826-434b-a961-4341c16068db.png" alt="" style="display:block;margin:0 auto" />

<p>This is the resource-based policy doing its job — it controls who can call your Lambda. Think of it as the front door. S3 needed a key, you took it away, and now you gave it back.</p>
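<p>The same fix can be scripted with Lambda's <code>add_permission</code> API. A sketch mirroring the console form above (the account ID is a placeholder, and the live call is gated behind an <code>APPLY_CHANGES</code> env flag so the file runs dry by default):</p>
<pre><code class="language-python">import os

# Mirrors the console form; SourceAccount is a placeholder
PERMISSION = {
    "FunctionName": "lambda-function-test-1x",
    "StatementId": "s3-123",
    "Action": "lambda:InvokeFunction",
    "Principal": "s3.amazonaws.com",
    "SourceArn": "arn:aws:s3:::lambda-trigger-test-1x",
    "SourceAccount": "123456789012",
}

if os.environ.get("APPLY_CHANGES"):
    import boto3
    boto3.client("lambda").add_permission(**PERMISSION)
</code></pre>
<p>The <code>SourceArn</code> and <code>SourceAccount</code> conditions matter: without them, any bucket with notifications pointed at your function could invoke it, not just yours.</p>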
<h3>5. Break It Again — Different Way</h3>
<p>This time we test the <strong>execution role</strong> — what Lambda is allowed to do outbound. Instead of touching the resource-based policy, I went to IAM → Roles, found the execution role attached to my Lambda function (search for <strong>lambda-function-test-1x</strong>), clicked Add permissions, and attached the <a href="https://us-east-1.console.aws.amazon.com/iam/home?region=ca-central-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonS3FullAccess">AmazonS3FullAccess</a> policy.</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/798a1ec2-0c2d-4785-955e-7a885a311657.png" alt="" style="display:block;margin:0 auto" />

<p><em>Note: This is too broad. Lambda can do anything to any S3 bucket in your account — delete buckets, change policies, read everything. This is a security risk.</em></p>
<p>We'll remove <a href="https://us-east-1.console.aws.amazon.com/iam/home?region=ca-central-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonS3FullAccess">AmazonS3FullAccess</a> and replace it with an inline policy. Click Add permissions, choose Create inline policy, select JSON, paste the policy below, and click Next. When prompted for a name, I called it <strong>s3-limited-access</strong>.</p>
<pre><code class="language-json">{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": "s3:GetObject",
			"Resource": "arn:aws:s3:::lambda-trigger-test-1x/*"
		},
		{
			"Effect": "Allow",
			"Action": "s3:PutObject",
			"Resource": "arn:aws:s3:::lambda-trigger-test-1x/*"
		}
	]
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/4b8c6b57-e609-434c-ad7e-1e44151e209d.png" alt="" style="display:block;margin:0 auto" />

<p>Click Create policy, and it should now be attached to the role.</p>
<p>Lambda can now ONLY read and write to this specific bucket. Nothing else. No other buckets, no delete, no list — just GetObject and PutObject on exactly one bucket. This is called least privilege.</p>
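<p>If you'd rather keep the policy in code than in the console editor, the same document can be generated with the standard library's <code>json</code> module. The two statements can also be merged into one, since they share a resource. A small sketch:</p>
<pre><code class="language-python">import json

BUCKET_ARN = "arn:aws:s3:::lambda-trigger-test-1x"

# Same least-privilege policy as above, with the two statements merged
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"{BUCKET_ARN}/*",
        }
    ],
}

# Paste the output into the inline-policy JSON editor
print(json.dumps(policy, indent=2))
</code></pre>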
<p>Then I updated my Lambda function to actually do something with the file — fetch it with <code>boto3</code> and log its content type and size:</p>
<pre><code class="language-python">import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        
        response = s3_client.get_object(Bucket=bucket, Key=key)
        
        content_type = response['ContentType']
        size = response['ContentLength']
        
        print(f"File uploaded: {key}")
        print(f"Bucket: {bucket}")
        print(f"Content type: {content_type}")
        print(f"File size: {size} bytes")
    
    return {'statusCode': 200}
</code></pre>
<p>You should see logs like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/3e52820b-2b40-461b-a593-d7ac611ca6f4.png" alt="" style="display:block;margin:0 auto" />
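<p>Because this version calls out to S3, checking it locally means swapping the client for a stand-in. A rough sketch with a hand-rolled fake — the <code>FakeS3Client</code> below is hypothetical test scaffolding, not an AWS API, and the handler body is the same as above with the fake substituted for <code>boto3.client('s3')</code>:</p>
<pre><code class="language-python">class FakeS3Client:
    """Hypothetical stand-in for boto3's S3 client, returning a canned response."""
    def get_object(self, Bucket, Key):
        return {"ContentType": "image/png", "ContentLength": 1024}

s3_client = FakeS3Client()  # in the real function: boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        response = s3_client.get_object(Bucket=bucket, Key=key)
        print(f"File uploaded: {key}")
        print(f"Content type: {response['ContentType']}")
        print(f"File size: {response['ContentLength']} bytes")
    return {'statusCode': 200}

sample_event = {"Records": [{"s3": {"bucket": {"name": "lambda-trigger-test-1x"},
                                    "object": {"key": "test_file.png"}}}]}
result = lambda_handler(sample_event, None)
</code></pre>
<p>For anything beyond a quick check, botocore's <code>Stubber</code> does this properly against the real client.</p>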

<p>Now go to IAM → Roles, search for <strong>lambda-function-test-1x</strong>, and select it.</p>
<p>Remove the <a href="https://us-east-1.console.aws.amazon.com/iam/home?region=ca-central-1#/roles/details/lambda-function-test-1x-role-ex26xf4g/editPolicy/s3-limited-access?step=addPermissions">s3-limited-access</a> policy by selecting it and clicking Remove. It should look like this now:</p>
<img src="https://cdn.hashnode.com/uploads/covers/69f0909e10a70b333597b082/dffe281e-1896-4216-9f70-92dbf8995cb8.png" alt="" style="display:block;margin:0 auto" />

<p>Upload a file. This time, you <em>do</em> get a log — but it's not a happy one:</p>
<pre><code class="language-plaintext">[ERROR] ClientError: An error occurred (AccessDenied) when calling 
the GetObject operation: Access Denied
</code></pre>
<p>Why is this different?</p>
<p>Because this time Lambda did get invoked — S3 had permission to call it (resource-based policy is still intact). But once Lambda started running, it tried to reach out to S3 to read the file, and it didn't have permission to do that. That's the execution role — it controls what Lambda can do from inside. When Lambda calls other AWS services (S3, DynamoDB, SQS, anything), it uses the permissions attached to its execution role. Without S3 read access on the role, the function starts, logs the error, and fails.</p>
<h3>6. The Key Lesson</h3>
<p>I've seen this explained in docs a dozen times but it never clicked until I actually saw both failure modes back to back. Here's how I now think about it:</p>
<table>
<thead>
<tr><th></th><th>What it controls</th><th>Failure mode</th></tr>
</thead>
<tbody>
<tr><td>Resource-based policy</td><td>Who can invoke your Lambda (inbound)</td><td>Silent — nothing happens, nothing is logged</td></tr>
<tr><td>Execution role</td><td>What your Lambda can do (outbound)</td><td>Loud — function runs, throws an error, CloudWatch logs it</td></tr>
</tbody>
</table>
<p>The silent failure is the dangerous one. If S3 can't invoke Lambda, you'll never know unless you're actively monitoring. There's no alarm, no dead-letter queue entry, nothing in S3 event logs by default. You could lose events and spend hours wondering why your pipeline is empty. The loud failure is actually the friendlier one — your function runs, it fails with a clear error, and CloudWatch catches it. Annoying, but fixable in five minutes.</p>
<h3>7. Conclusion</h3>
<p>If I were starting this over, I'd add CloudWatch alarms from day one — one on Lambda errors for the loud failures, and one on invocations for the silent kind, which caught me completely off guard; without monitoring, I wouldn't have known anything was wrong. Next up, I want to explore what happens when you have multiple S3 event types on the same function and whether you can scope the resource-based policy to a specific prefix rather than the whole bucket. There's always another layer.</p>
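<p>One wrinkle worth knowing before setting that up: an alarm on the <code>Errors</code> metric only catches the loud failure, because the silent one never produces an invocation at all. One option for the silent case is an alarm on <code>Invocations</code> that treats missing data as breaching. A sketch with boto3, assuming uploads normally arrive at least hourly (the alarm name, period, and threshold are illustrative, and the live call is gated behind an <code>APPLY_CHANGES</code> env flag):</p>
<pre><code class="language-python">import os

# Alarm that fires when the function stops being invoked entirely.
# Values here are illustrative; tune Period/Threshold to your traffic.
ALARM = {
    "AlarmName": "lambda-function-test-1x-no-invocations",
    "Namespace": "AWS/Lambda",
    "MetricName": "Invocations",
    "Dimensions": [{"Name": "FunctionName", "Value": "lambda-function-test-1x"}],
    "Statistic": "Sum",
    "Period": 3600,
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "LessThanThreshold",
    "TreatMissingData": "breaching",  # no data at all also trips the alarm
}

if os.environ.get("APPLY_CHANGES"):
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(**ALARM)
</code></pre>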
]]></content:encoded></item></channel></rss>