Getting Started

Learn how to set up and run the Social Crawler Index.

This guide walks you through setting up and running the Social Crawler Index on your local machine.

Prerequisites

Before you begin, ensure you have the following installed:

  • Node.js 18 or higher
  • TursoDB instance (local or cloud)
  • Twitter account credentials
  • Google Cloud API key with Gemini access

Installation

  1. Clone the repository:

    git clone <repository-url>
    cd social-crawler-ai/Social-Crawler-Index
  2. Install dependencies:

    npm install
  3. Configure environment variables:

    Create a .env file in the Social-Crawler-Index directory and add the following configuration. Note: Do not commit this file to version control.

    # Database Configuration
    TURSO_DB_URL=your_turso_db_url
    TURSO_DB_AUTH_TOKEN=your_turso_auth_token
    
    # Twitter Credentials
    TWITTER_USERNAME=your_twitter_username
    TWITTER_PASSWORD=your_twitter_password
    TWITTER_EMAIL=your_twitter_email
    
    # AI/ML Services
    GEMINI_API_KEY=your_gemini_api_key
    
    # Application Settings
    APP_ENV=development
    LOG_LEVEL=info
    TWEET_FETCH_INTERVAL=300000 # 5 minutes in milliseconds
    MAX_TWEETS_PER_FETCH=10
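A missing key in .env is a common cause of startup failures. A quick check like the following sketch can catch that before you run the crawler; `missingKeys` is a hypothetical helper (not part of the project), and the key names match the configuration above.

```typescript
// Hypothetical helper: given the text of a .env file, return the
// required keys that are not set. Commented-out lines are ignored.
const REQUIRED_KEYS = [
  "TURSO_DB_URL",
  "TURSO_DB_AUTH_TOKEN",
  "TWITTER_USERNAME",
  "TWITTER_PASSWORD",
  "TWITTER_EMAIL",
  "GEMINI_API_KEY",
];

function missingKeys(envText: string): string[] {
  // Collect every KEY that appears on a KEY=value line.
  const present = new Set(
    envText
      .split("\n")
      .filter((line) => !line.trim().startsWith("#"))
      .map((line) => line.split("=")[0].trim())
      .filter((key) => key.length > 0),
  );
  return REQUIRED_KEYS.filter((key) => !present.has(key));
}

// Example: only the database URL is set, so five keys are reported.
console.log(missingKeys("TURSO_DB_URL=libsql://example.turso.io"));
```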

Usage

Building the Project

To build the TypeScript code, run the following command:

npm run build

Managing Twitter Accounts

You can add a Twitter account to monitor using the following command:

npm run add-account <username>

Running the Crawler

To start the monitoring service, run:

npm start

The crawler then begins monitoring the configured Twitter accounts, analyzing new tweets, and storing the results.
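The monitoring loop can be pictured as the following sketch. It uses the interval and batch-size values from the .env above; `fetchTweets`, `analyzeTweet`, and `storeResult` are hypothetical names standing in for the project's real modules.

```typescript
// Settings mirroring the .env configuration above.
const intervalMs = 300000; // TWEET_FETCH_INTERVAL: 5 minutes in milliseconds
const maxTweets = 10;      // MAX_TWEETS_PER_FETCH

// Cap a fetched batch at MAX_TWEETS_PER_FETCH (never negative).
function takeBatch<T>(tweets: T[], max: number): T[] {
  return tweets.slice(0, Math.max(0, max));
}

// One monitoring cycle: fetch, analyze, and store for each account.
// The inner calls are placeholders for the project's real modules.
async function cycle(accounts: string[]): Promise<void> {
  for (const account of accounts) {
    // const tweets = takeBatch(await fetchTweets(account), maxTweets);
    // for (const tweet of tweets) await storeResult(await analyzeTweet(tweet));
    void account;
  }
}

console.log(`polling every ${intervalMs} ms, up to ${maxTweets} tweets per account`);
// The service would repeat the cycle on a timer, roughly:
// setInterval(() => cycle(monitoredAccounts), intervalMs);
void cycle;
```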
