# Getting Started

This guide walks you through setting up and running the Social Crawler Index on your local machine.
## Prerequisites

Before you begin, ensure you have the following installed:
- Node.js 18 or higher
- TursoDB instance (local or cloud)
- Twitter account credentials
- Google Cloud API key with Gemini access
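Before installing, you can confirm the runtime meets the Node.js 18+ requirement. This is an optional, illustrative check, not a script shipped with the project:

```typescript
// Illustrative sanity check: verify the running Node.js version is 18 or newer.
const [major] = process.versions.node.split(".").map(Number);

if (major < 18) {
  console.error(`Node.js 18+ required, found ${process.versions.node}`);
  process.exit(1);
}
console.log(`Node.js ${process.versions.node} OK`);
```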
## Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd social-crawler-ai/Social-Crawler-Index
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Create a `.env` file:

   Create a `.env` file in the `Social-Crawler-Index` directory and add the following configuration. Note: do not commit this file to version control.

   ```bash
   # Database Configuration
   TURSO_DB_URL=your_turso_db_url
   TURSO_DB_AUTH_TOKEN=your_turso_auth_token

   # Twitter Credentials
   TWITTER_USERNAME=your_twitter_username
   TWITTER_PASSWORD=your_twitter_password
   TWITTER_EMAIL=your_twitter_email

   # AI/ML Services
   GEMINI_API_KEY=your_gemini_api_key

   # Application Settings
   APP_ENV=development
   LOG_LEVEL=info
   TWEET_FETCH_INTERVAL=300000 # 5 minutes in milliseconds
   MAX_TWEETS_PER_FETCH=10
   ```
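To illustrate how these settings could be consumed at startup, here is a minimal TypeScript sketch. The variable names match the `.env` file above, but the `loadConfig` helper and the `CrawlerConfig` shape are assumptions for illustration, not the project's actual code:

```typescript
// Hypothetical config loader: required variables fail fast,
// optional ones fall back to the documented defaults.
interface CrawlerConfig {
  tursoDbUrl: string;
  geminiApiKey: string;
  tweetFetchInterval: number; // milliseconds
  maxTweetsPerFetch: number;
}

function loadConfig(env: Record<string, string | undefined>): CrawlerConfig {
  const required = (key: string): string => {
    const value = env[key];
    if (!value) throw new Error(`Missing required env var: ${key}`);
    return value;
  };

  return {
    tursoDbUrl: required("TURSO_DB_URL"),
    geminiApiKey: required("GEMINI_API_KEY"),
    // Defaults mirror the sample .env: 5 minutes, 10 tweets per fetch.
    tweetFetchInterval: Number(env.TWEET_FETCH_INTERVAL ?? 300000),
    maxTweetsPerFetch: Number(env.MAX_TWEETS_PER_FETCH ?? 10),
  };
}

// Example: validate a minimal environment (dummy values).
const config = loadConfig({
  TURSO_DB_URL: "libsql://example.turso.io",
  GEMINI_API_KEY: "dummy-key",
});
console.log(config.tweetFetchInterval); // 300000
```

In the real application you would pass `process.env` (after loading the `.env` file) instead of a literal object.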
## Usage

### Building the Project

To build the TypeScript code, run:

```bash
npm run build
```

### Managing Twitter Accounts

You can add a Twitter account to monitor with:

```bash
npm run add-account <username>
```

### Running the Crawler

To start the monitoring service, run:

```bash
npm start
```

The crawler then begins monitoring the configured Twitter accounts, analyzing tweets, and storing the results.
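Conceptually, each monitoring cycle visits the tracked accounts and takes at most `MAX_TWEETS_PER_FETCH` new tweets per account before analysis and storage. The sketch below illustrates that batching behavior; the `Tweet` type and `capBatch` helper are illustrative names, not the project's actual API:

```typescript
// Hypothetical sketch of batch capping in one monitoring cycle.
type Tweet = { id: string; text: string };

// Cap a fetched timeline at the configured batch size (MAX_TWEETS_PER_FETCH).
function capBatch(tweets: Tweet[], maxPerFetch: number): Tweet[] {
  return tweets.slice(0, maxPerFetch);
}

// Stubbed timeline of 15 tweets for one account.
const timeline: Tweet[] = Array.from({ length: 15 }, (_, i) => ({
  id: String(i),
  text: `tweet ${i}`,
}));

const batch = capBatch(timeline, 10);
console.log(batch.length); // 10

// In the real service, a cycle like this would repeat on a timer, e.g.:
//   setInterval(runCycle, TWEET_FETCH_INTERVAL); // every 5 minutes by default
```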