In this codelab, you'll learn how to build a real-time profanity filter using Firebase Cloud Functions and Firestore. The function will automatically detect and sanitize inappropriate content in messages as they're added to your database.

What you'll build:

What you'll learn:

Prerequisites:

30-45 minutes

Initialize Firebase Project

First, let's set up our Firebase project structure.

Create a local project folder (recommended) — this keeps your code and Firebase config together. From a terminal run:

mkdir profanity-filter
cd profanity-filter
  1. Create a new Firebase project in the Firebase Console
  2. Enable Firestore Database
  3. Install the Firebase CLI if you haven't already:
npm install -g firebase-tools
  4. Log in to Firebase:
firebase login
  5. Initialize your project:
firebase init

During firebase init, select the Functions, Firestore, and Emulators features when prompted.

Select the following options:

Project Structure

After initialization, your project should look like this:

profanity-filter/
├── firebase.json
├── firestore.rules
├── firestore.indexes.json
└── functions/
    ├── main.py
    └── requirements.txt

Update firebase.json

Configure your Firebase project settings (firebase init may have generated most of this for you already):

{
  "firestore": {
    "database": "(default)",
    "location": "nam5",
    "rules": "firestore.rules",
    "indexes": "firestore.indexes.json"
  },
  "functions": [
    {
      "source": "functions",
      "codebase": "default",
      "ignore": [
        "venv",
        ".git",
        "firebase-debug.log",
        "firebase-debug.*.log",
        "*.local"
      ],
      "runtime": "python313"
    }
  ]
}

Firebase configuration settings in firebase.json file

Configure the Firebase Emulators (Optional)

You can configure the Functions and Firestore emulators during firebase init so you can run and test your function locally without deploying.

When asked during initialization, select Functions Emulator and Firestore Emulator, enable the Emulator UI, and choose the ports (defaults are shown in the prompts). The initialization writes configuration to firebase.json and .firebaserc and will download the emulator binaries if you confirm.
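After you confirm, firebase init adds an emulators section to firebase.json. A sketch of what that section may look like (the ports shown are the defaults offered in the prompts; your generated file may differ):

```json
{
  "emulators": {
    "functions": { "port": 5001 },
    "firestore": { "port": 8080 },
    "ui": { "enabled": true }
  }
}
```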

Example prompts you may see during setup:

Emulator selection and port prompts during firebase init

After choosing options and confirming the download you should see confirmation that initialization completed:

Emulator setup completion and files written to the project

What this does for you:

Python dependencies (requirements.txt)

Update the existing requirements.txt file inside the functions directory with any packages your function needs — firebase init created this file for you at functions/requirements.txt.

Example (recommended minimum):

firebase_functions~=0.1.0
firebase-admin
better_profanity

Edit functions/requirements.txt if you need to add or bump packages.

Requirements.txt

Install dependencies locally (optional, recommended for testing)

For fast, safe local development create and activate a virtual environment inside the functions folder, then install dependencies.

Windows (PowerShell):

cd functions
python -m venv .venv
# Activate the venv
.\.venv\Scripts\Activate.ps1
# (Optional) upgrade pip
python -m pip install --upgrade pip
# Install dependencies
pip install -r requirements.txt

macOS / Linux (bash):

cd functions
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt

Notes and tips:

Create the Profanity Filter Function

Replace the contents of functions/main.py:

# Cloud Functions for Firebase (Python): real-time profanity filter.
# Deploy with `firebase deploy --only functions`
from firebase_functions import firestore_fn
from firebase_admin import initialize_app, get_app
from better_profanity import profanity

_app_initialized = False

def ensure_firebase_initialized():
    """Lazily initialize the Firebase Admin SDK on first invocation."""
    global _app_initialized
    if not _app_initialized:
        try:
            get_app()  # Raises ValueError if no default app exists yet
        except ValueError:
            initialize_app()
        _app_initialized = True


@firestore_fn.on_document_created(document="messages/{pushId}")
def profanity_filter(event: firestore_fn.Event[firestore_fn.DocumentSnapshot | None]) -> None:
    """
    Listens for new messages, checks them for profanity, and adds a
    sanitized version and a flag.
    """
    ensure_firebase_initialized()

    if event.data is None:
        print("No document data found.")
        return

    original_text = event.data.get("text")

    if not isinstance(original_text, str):
        print(f"Document {event.params['pushId']} is missing 'text' field or field is not a string. No action taken.")
        return

    try:
        # Censor the text using the library
        sanitized_text = profanity.censor(original_text)

        # Check if the text was changed (i.e., if profanity was found)
        was_profane = original_text != sanitized_text

        print(f"Original: '{original_text}', Sanitized: '{sanitized_text}', Was Profane: {was_profane}")

        # Write the new fields back to the document
        event.data.reference.update({
            "sanitized_text": sanitized_text,
            "is_profane": was_profane
        })
        
        print(f"Successfully updated document {event.params['pushId']}")
        
    except Exception as e:
        print(f"Error processing document {event.params['pushId']}: {str(e)}")
        # Optionally, you could write an error flag to the document
        try:
            event.data.reference.update({
                "processing_error": True,
                "error_message": str(e)
            })
        except Exception as update_error:
            print(f"Failed to update document with error info: {str(update_error)}")

Function Breakdown

Let's understand what this function does:

  1. Trigger: @firestore_fn.on_document_created(document="messages/{pushId}") - Triggers when a new document is created in the messages collection
  2. Validation: Checks if the document has a text field that's a string
  3. Processing: Uses the better-profanity library to detect and censor inappropriate content
  4. Update: Adds two new fields to the document:
    • sanitized_text: The cleaned version of the text
    • is_profane: Boolean flag indicating if profanity was found
  5. Error Handling: Catches and logs errors, optionally writing error information back to the document
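The core censor-and-flag logic can be sketched without Firestore. Note this is an illustrative stand-in, not better_profanity itself: the word list and four-asterisk replacement below are assumptions chosen to mimic the library's behavior, while the real library ships its own word list and also catches obfuscated spellings.

```python
def censor(text: str, banned=("damn", "hell")) -> str:
    """Replace each banned word with asterisks, mimicking profanity.censor()."""
    out = []
    for word in text.split(" "):
        # Compare case-insensitively, ignoring trailing punctuation
        core = word.strip(".,!?").lower()
        out.append("*" * 4 if core in banned else word)
    return " ".join(out)

original = "This is a damn test message"
sanitized = censor(original)

# Same flag the Cloud Function writes back as is_profane
was_profane = original != sanitized

print(sanitized)    # This is a **** test message
print(was_profane)  # True
```

Comparing the original and censored strings is the simplest way to derive the is_profane flag: no separate detection pass is needed, since any censoring implies profanity was found.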

Before deploying to production, you can test your function locally using the Firebase Emulator Suite. This allows you to validate your function behavior without incurring cloud costs.

Set Up the Emulator

  1. Start the emulators:
firebase emulators:start

This will start both the Firestore and Functions emulators. You should see output similar to:

┌─────────────────────────────────────────────────────────────┐
│ ✔  All emulators ready! It is now safe to connect your app. │
│ i  View Emulator UI at http://127.0.0.1:4000/               │
└─────────────────────────────────────────────────────────────┘

┌────────────────┬────────────────┬─────────────────────────────────┐
│ Emulator       │ Host:Port      │ View in Emulator UI             │
├────────────────┼────────────────┼─────────────────────────────────┤
│ Functions      │ 127.0.0.1:5001 │ http://127.0.0.1:4000/functions │
│ Firestore      │ 127.0.0.1:8080 │ http://127.0.0.1:4000/firestore │
└────────────────┴────────────────┴─────────────────────────────────┘

Functions running in emulator

Test Your Function

  1. Open the Emulator UI: Navigate to http://127.0.0.1:4000/ in your browser
  2. Go to Firestore: Click on the Firestore tab in the Emulator UI
     Firestore emulator interface showing empty database
  3. Create a test document:
    • Click "Start collection"
    • Collection ID: messages
    • Document ID: (auto-generate or specify)
    • Add a field: text (string) with a value like "This is a damn test message"
    • Click "Save"
     Firestore document with original text field containing profanity
  4. Monitor the function execution:
    • Switch to the Functions tab to see logs
    • Go back to Firestore to see the updated document with sanitized_text and is_profane fields
     Document automatically updated with sanitized text and profanity flag

Benefits of Emulator Testing

Deploy to Firebase

Firebase console billing page showing Blaze plan requirement

Note: deploying Cloud Functions requires your project to be on the Blaze (pay-as-you-go) plan. From your project root directory (not inside the functions directory), deploy the function:

firebase deploy --only functions

You should see output similar to:

Terminal output showing function deployment process

=== Deploying to 'your-project-id'...

i  deploying functions
✔  functions: Finished running predeploy script.
i  functions: ensuring required API cloudfunctions.googleapis.com is enabled...
✔  functions: required API cloudfunctions.googleapis.com is enabled
i  functions: ensuring required API cloudbuild.googleapis.com is enabled...
✔  functions: required API cloudbuild.googleapis.com is enabled
i  functions: preparing functions directory for uploading...
i  functions: packaged functions (X KB) for uploading
✔  functions: functions folder uploaded successfully
i  functions: creating Python function profanity_filter...
✔  functions[profanity_filter]: Finished running predeploy script.
✔  functions[profanity_filter]: functions deployed successfully.

✔  Deploy complete!

Firebase Console

  1. Open the Firebase Console
  2. Navigate to Firestore Database
  3. Create a new document in the messages collection:

Firestore console interface for creating a new document

{
  "text": "This is a damn test message",
  "user": "testuser",
  "timestamp": "2024-01-01T12:00:00Z"
}
  4. Watch as the function automatically adds the new fields:

Firestore document showing sanitized_text and is_profane fields after function execution
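The resulting document keeps the original fields and gains the two new ones. A sketch of the expected before/after shape, using the test input above (the sanitized value assumes the library censors "damn" with four asterisks):

```python
# Document as created in the Firestore console
before = {
    "text": "This is a damn test message",
    "user": "testuser",
    "timestamp": "2024-01-01T12:00:00Z",
}

# Fields the function merges in via event.data.reference.update(...)
added = {
    "sanitized_text": "This is a **** test message",
    "is_profane": True,
}

# update() merges rather than replaces, so the original fields survive
after = {**before, **added}
print(sorted(after.keys()))
```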

View Function Logs

Firebase console functions tab showing execution logs and performance metrics

  1. In Firebase Console, go to Functions
  2. Click on your profanity_filter function
  3. View logs to see function execution details

Common Test Cases

Test your function with various inputs:
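For example, the cases below exercise the function's main branches. The expected flags are assumptions based on better_profanity's default word list (which flags common profanity such as "damn"); a document whose text field is missing or not a string triggers the early return, so no fields are added at all.

```python
# (input text, expected is_profane) pairs for manual testing.
# Expected values assume the library's default word list.
test_cases = [
    ("Hello, nice to meet you!", False),    # clean text: flag stays False
    ("This is a damn test message", True),  # common profanity: censored
    ("", False),                            # empty string: nothing to censor
]

for text, expected in test_cases:
    print(f"{text!r} -> is_profane should be {expected}")

# Not listed: a document with no "text" field, or with a non-string
# value such as 42 — the function logs a message and returns early.
```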

Congratulations! You've successfully built a real-time profanity filter using Firebase Cloud Functions. Your function now:

✅ Automatically triggers when new messages are added to Firestore
✅ Detects and censors inappropriate content
✅ Adds metadata about content moderation
✅ Handles errors gracefully
✅ Scales automatically with Firebase

What You've Learned

Next Steps

Resources

Common Issues

Function not triggering:

Permission errors:

Dependencies not found:

Timeout errors:

Happy coding! 🚀