Pinneaple.io

Google Docs + Pinecone + Claude Code Integration Setup

Keywords: directory-ecosystem-oregon, ai-agent-infrastructure, leverage-ai-platform, mcp-server-development

Phase 1: Google Docs Integration

1. Google Cloud Project Setup

# Install Google Cloud CLI if not already installed
curl https://sdk.cloud.google.com | bash
gcloud init
gcloud auth application-default login

2. Enable Required APIs

gcloud services enable drive.googleapis.com
gcloud services enable docs.googleapis.com
gcloud services enable sheets.googleapis.com

3. Service Account Creation

# Create service account for programmatic access
gcloud iam service-accounts create claude-integration \
    --display-name="Claude Integration Service Account"

# Download credentials
gcloud iam service-accounts keys create ~/credentials.json \
    --iam-account=claude-integration@YOUR-PROJECT-ID.iam.gserviceaccount.com

4. MCP Server for Google Docs

Create mcp-google-docs-server.py:

import asyncio
import json
import os
from typing import Any, Dict, List
from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
from mcp.server.stdio import stdio_server
from mcp.types import Resource, Tool, TextContent
from googleapiclient.discovery import build
from google.oauth2.service_account import Credentials

class GoogleDocsMCPServer:
    def __init__(self):
        self.credentials_path = os.path.expanduser("~/credentials.json")
        self.scopes = [
            'https://www.googleapis.com/auth/documents',
            'https://www.googleapis.com/auth/drive'
        ]
        
    async def initialize_services(self):
        creds = Credentials.from_service_account_file(
            self.credentials_path, scopes=self.scopes
        )
        self.docs_service = build('docs', 'v1', credentials=creds)
        self.drive_service = build('drive', 'v3', credentials=creds)

    async def create_document(self, title: str, content: str = "") -> str:
        doc = self.docs_service.documents().create(
            body={'title': title}
        ).execute()
        
        if content:
            requests = [{
                'insertText': {
                    'location': {'index': 1},
                    'text': content
                }
            }]
            self.docs_service.documents().batchUpdate(
                documentId=doc['documentId'],
                body={'requests': requests}
            ).execute()
            
        return doc['documentId']

    async def update_document(self, doc_id: str, content: str):
        # Insert text at the start of the document body (index 1)
        requests = [{'insertText': {'location': {'index': 1}, 'text': content}}]
        self.docs_service.documents().batchUpdate(
            documentId=doc_id, body={'requests': requests}
        ).execute()

    async def search_documents(self, query: str) -> List[Dict]:
        # Full-text search for Google Docs files in Drive
        q = f"fullText contains '{query}' and mimeType='application/vnd.google-apps.document'"
        results = self.drive_service.files().list(q=q, fields="files(id, name)").execute()
        return results.get('files', [])

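5. Wiring the Class into an MCP Server

The class above holds the Google API calls but is not yet exposed over MCP. Below is a minimal sketch, added at the bottom of mcp-google-docs-server.py, assuming the low-level mcp Python SDK already imported at the top of the file; the create_document tool name and input schema are illustrative choices, not a fixed contract.

# Minimal sketch: expose create_document as an MCP tool over stdio
server = Server("google-docs")
gdocs = GoogleDocsMCPServer()

@server.list_tools()
async def list_tools() -> List[Tool]:
    return [Tool(
        name="create_document",
        description="Create a Google Doc with optional initial content",
        inputSchema={
            "type": "object",
            "properties": {"title": {"type": "string"}, "content": {"type": "string"}},
            "required": ["title"],
        },
    )]

@server.call_tool()
async def call_tool(name: str, arguments: Dict[str, Any]) -> List[TextContent]:
    if name == "create_document":
        await gdocs.initialize_services()
        doc_id = await gdocs.create_document(arguments["title"], arguments.get("content", ""))
        return [TextContent(type="text", text=f"Created document {doc_id}")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream, server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
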
Phase 2: Pinecone Integration

1. Pinecone Setup

# Pin to the legacy client APIs used in the code below (pinecone.init, openai.Embedding.create)
pip install "pinecone-client<3" "openai<1" tiktoken

2. Environment Configuration

# Add to your .env file
PINECONE_API_KEY=your_pinecone_api_key
PINECONE_ENVIRONMENT=your_pinecone_environment
OPENAI_API_KEY=your_openai_api_key

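The Python servers below read these values with os.getenv. One way to load a local .env file during development is shown in this sketch; it assumes the python-dotenv package, an extra dependency not listed in the install step above.

# Sketch: load .env into the process environment before starting the servers
# (assumes: pip install python-dotenv)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
assert os.getenv("PINECONE_API_KEY"), "PINECONE_API_KEY is not set"
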
3. MCP Server for Pinecone

Create mcp-pinecone-server.py:

import os
import pinecone
import openai
from typing import List, Dict, Any

class PineconeMCPServer:
    def __init__(self):
        self.pinecone_key = os.getenv("PINECONE_API_KEY")
        self.pinecone_env = os.getenv("PINECONE_ENVIRONMENT")
        self.openai_key = os.getenv("OPENAI_API_KEY")
        
    async def initialize_pinecone(self):
        # Legacy pinecone-client v2 API (matches the pinned install above);
        # the v3+ client replaces pinecone.init with the Pinecone(...) class.
        pinecone.init(
            api_key=self.pinecone_key,
            environment=self.pinecone_env
        )
        
        # Create index for directory data if it doesn't exist
        if "oregon-directory-index" not in pinecone.list_indexes():
            pinecone.create_index(
                name="oregon-directory-index",
                dimension=1536,  # OpenAI embedding dimension
                metric="cosine"
            )
        
        self.index = pinecone.Index("oregon-directory-index")

    async def embed_text(self, text: str) -> List[float]:
        # Legacy openai<1.0 embeddings call (matches the pinned install above)
        response = openai.Embedding.create(
            input=text,
            model="text-embedding-ada-002"
        )
        return response['data'][0]['embedding']

    async def store_business_data(self, business_data: Dict):
        # Create embeddings for business information
        text_content = f"{business_data['name']} {business_data['description']} {business_data['category']} {business_data['location']}"
        embedding = await self.embed_text(text_content)
        
        self.index.upsert([{
            'id': business_data['id'],
            'values': embedding,
            'metadata': business_data
        }])

    async def search_similar_businesses(self, query: str, top_k: int = 10):
        query_embedding = await self.embed_text(query)
        results = self.index.query(
            vector=query_embedding,
            top_k=top_k,
            include_metadata=True
        )
        return results['matches']

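Before wiring this class into an MCP server, a quick usage sketch with made-up business data can confirm that the index, embeddings, and query path all work; the record fields and query string below are purely illustrative.

# Usage sketch with hypothetical data: index one business, then query by similarity
import asyncio

async def demo():
    server = PineconeMCPServer()
    await server.initialize_pinecone()

    await server.store_business_data({
        'id': 'biz-001',
        'name': 'Willamette Valley Supplements',
        'description': 'Locally made herbal supplements and vitamins',
        'category': 'Health & Wellness',
        'location': 'Salem, OR'
    })

    matches = await server.search_similar_businesses("herbal supplement shops near Salem")
    for match in matches:
        print(match['id'], match['score'], match['metadata']['name'])

asyncio.run(demo())
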
Phase 3: Claude Code Integration

1. MCP Server Configuration

Create mcp-config.json:

{
  "mcpServers": {
    "google-docs": {
      "command": "python",
      "args": ["mcp-google-docs-server.py"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/credentials.json"
      }
    },
    "pinecone": {
      "command": "python",
      "args": ["mcp-pinecone-server.py"],
      "env": {
        "PINECONE_API_KEY": "${PINECONE_API_KEY}",
        "PINECONE_ENVIRONMENT": "${PINECONE_ENVIRONMENT}",
        "OPENAI_API_KEY": "${OPENAI_API_KEY}"
      }
    }
  }
}

2. Claude Code Configuration

Update your Claude Code settings to include the MCP servers:

# Register the MCP servers with Claude Code (claude mcp add <name> -- <command> [args...])
claude mcp add google-docs -- python mcp-google-docs-server.py
claude mcp add pinecone -- python mcp-pinecone-server.py

Phase 4: Inter-Agent Communication

1. Agent Communication Hub

Create agent-hub.py:

import asyncio
import json
from typing import Dict, List, Any
import aiohttp

class AgentHub:
    def __init__(self):
        self.active_agents = {}
        self.message_queue = asyncio.Queue()
        
    async def register_agent(self, agent_id: str, capabilities: List[str]):
        self.active_agents[agent_id] = {
            'capabilities': capabilities,
            'status': 'active',
            'last_ping': asyncio.get_event_loop().time()
        }
        
    async def route_message(self, message: Dict[str, Any]):
        # Route messages between agents based on capabilities
        required_capability = message.get('requires')
        target_agents = [
            agent_id for agent_id, data in self.active_agents.items()
            if required_capability in data['capabilities']
        ]
        
        for agent_id in target_agents:
            await self.send_to_agent(agent_id, message)
            
    async def send_to_agent(self, agent_id: str, message: Dict[str, Any]):
        # Queue the message for delivery; a worker task (or the agent itself) drains the queue
        await self.message_queue.put({'target': agent_id, 'payload': message})

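2. Hub Usage Example

A short usage sketch of the AgentHub class above; the agent IDs and capability names are illustrative, and with the queue-based send_to_agent shown earlier, routed messages simply land on the hub's message_queue for a worker to deliver.

# Usage sketch: register two agents, then route a message by required capability
import asyncio

async def demo_hub():
    hub = AgentHub()
    await hub.register_agent("docs-agent", ["google-docs", "content-generation"])
    await hub.register_agent("vector-agent", ["pinecone", "semantic-search"])

    # Queued for every agent that advertises the 'pinecone' capability
    await hub.route_message({
        'requires': 'pinecone',
        'action': 'search_similar_businesses',
        'query': 'outdoor gear retailers in Bend'
    })

asyncio.run(demo_hub())
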
Phase 5: Directory Project Integration

1. Business Data Pipeline

async def process_directory_data():
    # Assumes pinecone_server, docs_server, and the helpers load_business_data,
    # generate_business_template, and update_business_record exist in your project.
    # Load business data from your collected sources
    businesses = load_business_data()  # Your 5000 potential clients
    
    for business in businesses:
        # Store in Pinecone for vector search
        await pinecone_server.store_business_data(business)
        
        # Create Google Doc template for each business
        doc_id = await docs_server.create_document(
            title=f"{business['name']} - Directory Profile",
            content=generate_business_template(business)
        )
        
        business['doc_id'] = doc_id
        
        # Update your database with the doc_id for future reference
        update_business_record(business)

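The pipeline above calls generate_business_template, which is not defined in this document. One hypothetical version, shown only to make the pipeline self-contained, renders a business record into the initial text for its directory profile doc:

# Hypothetical helper: render the initial text for a business's directory profile doc
from typing import Dict

def generate_business_template(business: Dict) -> str:
    return (
        f"{business['name']}\n"
        f"Category: {business['category']}\n"
        f"Location: {business['location']}\n\n"
        f"About\n{business['description']}\n\n"
        f"Contact\n"
        f"Phone: {business.get('phone', 'TBD')}\n"
        f"Website: {business.get('website', 'TBD')}\n"
    )
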
Next Steps

  1. Test Each Integration Individually

    • Google Docs API calls

    • Pinecone vector operations

    • MCP server communication

  2. Deploy MCP Servers

    • Set up on your local system

    • Configure Claude Code to recognize them

  3. Create Agent Workflows

    • Directory creation automation

    • SEO content generation

    • Lead management pipeline

  4. Scale Testing

    • Start with 10 businesses

    • Test the full pipeline

    • Scale to your 96 directory sites

Key Benefits Once Deployed

  • Automated Directory Creation: Claude Code can generate and deploy 1-15 directory sites daily

  • Intelligent Content Management: Vector search across all business data

  • Seamless Documentation: All processes documented in Google Docs automatically

  • Agent Coordination: Multiple AI systems working in concert

  • Scalable Architecture: Ready for your 5000+ client pipeline

This setup gives you the foundation to dominate the Oregon directory market while building your AI agent empire!

Corrected MCP Configuration

{
  "mcpServers": {
    "Sequential Thinking": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-sequential-thinking"
      ]
    },
    "desktop-commander": {
      "command": "npx",
      "args": [
        "-y",
        "@wonderwhy-er/desktop-commander"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\mikes\\Desktop",
        "C:\\Users\\mikes\\Downloads",
        "C:\\Users\\mikes\\Documents",
        "C:\\Users\\mikes\\My-Developments"
      ]
    },
    "alpha-seo-gemini": {
      "command": "node",
      "args": [
        "C:\\Users\\mikes\\Alpha_SEO_App\\gemini-mcp-integration\\gemini-mcp-server.js"
      ],
      "env": {
        "GEMINI_API_KEY": "${GEMINI_API_KEY}"
      }
    },
    "pinecone-memory": {
      "command": "node",
      "args": [
        "C:\\Users\\mikes\\claude-assistant-helper.js"
      ],
      "env": {
        "PINECONE_API_KEY": "${PINECONE_API_KEY}",
        "PINECONE_INDEX_NAME": "claude-assistant-vectors"
      }
    }
  }
}

Issues Fixed:

  1. Command Structure: Your alpha-seo-gemini and pinecone-memory servers had command as an array, but MCP expects command to be a string and args to be a separate array.

  2. Security Vulnerability: Your API keys were exposed in plain text! I've changed them to use environment variables with ${VARIABLE_NAME} syntax.

Additional Setup Required:

Create a .env file in your MCP configuration directory:

GEMINI_API_KEY=your_gemini_api_key
PINECONE_API_KEY=your_pinecone_api_key

OR set them as system environment variables:

# Windows Command Prompt
set GEMINI_API_KEY=your_key_here
set PINECONE_API_KEY=your_key_here

# Windows PowerShell
$env:GEMINI_API_KEY="your_key_here"
$env:PINECONE_API_KEY="your_key_here"

Verification Steps:

  1. Check if your server files exist:

    • C:\Users\mikes\Alpha_SEO_App\gemini-mcp-integration\gemini-mcp-server.js

    • C:\Users\mikes\claude-assistant-helper.js

  2. Test server startup individually:

node "C:\Users\mikes\Alpha_SEO_App\gemini-mcp-integration\gemini-mcp-server.js"
node "C:\Users\mikes\claude-assistant-helper.js"

  3. Check MCP-Linker integration - it should now be able to connect all your servers properly!

This should get your full agent ecosystem talking - Sequential Thinking for reasoning, Desktop Commander for system control, Filesystem access, your Alpha SEO Gemini integration, and your Pinecone memory system all working together!

Try the corrected config and let me know if any of the servers fail to start! 🚀

MCP Server Troubleshooting & Fixes

Issue Diagnosis

Your MCP config syntax is correct, but servers are failing to start. Let's fix each one:

1. Check Server File Existence

# Check if your custom server files exist
dir "C:\Users\mikes\Alpha_SEO_App\gemini-mcp-integration\gemini-mcp-server.js"
dir "C:\Users\mikes\claude-assistant-helper.js"

2. Test Individual Server Startup

Test Pinecone Server (the failed one)

cd C:\Users\mikes
set PINECONE_API_KEY=your_pinecone_api_key
set PINECONE_INDEX_NAME=claude-assistant-vectors
node claude-assistant-helper.js

Test Alpha SEO Gemini Server

cd "C:\Users\mikes\Alpha_SEO_App\gemini-mcp-integration"
set GEMINI_API_KEY=your_gemini_api_key
node gemini-mcp-server.js

3. Most Likely Issues & Solutions

Issue A: Missing MCP Server Structure

Your custom servers might not implement the MCP protocol correctly. Here's a template:

#!/usr/bin/env node
// claude-assistant-helper.js template

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

class PineconeMemoryServer {
  constructor() {
    this.server = new Server(
      {
        name: "pinecone-memory",
        version: "0.1.0",
      },
      {
        capabilities: {
          tools: {},
          resources: {},
        },
      }
    );
    
    this.setupHandlers();
  }

  setupHandlers() {
    // Add your Pinecone integration logic here.
    // The SDK's setRequestHandler expects the request schema object, not a method string.
    this.server.setRequestHandler(ListToolsRequestSchema, async () => {
      return {
        tools: [
          {
            name: "search_memory",
            description: "Search Pinecone memory",
            inputSchema: {
              type: "object",
              properties: {
                query: { type: "string" }
              }
            }
          }
        ]
      };
    });
  }

  async run() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);
    console.error("Pinecone Memory MCP server running on stdio");
  }
}

const server = new PineconeMemoryServer();
server.run().catch(console.error);

Issue B: Missing Dependencies

# In your server directories, install MCP SDK
cd "C:\Users\mikes\Alpha_SEO_App\gemini-mcp-integration"
npm install @modelcontextprotocol/sdk

cd C:\Users\mikes
npm install @modelcontextprotocol/sdk @pinecone-database/pinecone

Issue C: Environment Variables

Create .env files in each server directory:

C:\Users\mikes\.env

PINECONE_API_KEY=your_pinecone_api_key
PINECONE_INDEX_NAME=claude-assistant-vectors

C:\Users\mikes\Alpha_SEO_App\gemini-mcp-integration\.env

GEMINI_API_KEY=your_gemini_api_key

4. Quick Fix - Use Working Alternatives

If your custom servers need rebuilding, temporarily replace them with working ones:

{
  "mcpServers": {
    "Sequential Thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    },
    "desktop-commander": {
      "command": "npx", 
      "args": ["-y", "@wonderwhy-er/desktop-commander"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y", 
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\mikes\\Desktop",
        "C:\\Users\\mikes\\Downloads", 
        "C:\\Users\\mikes\\Documents",
        "C:\\Users\\mikes\\My-Developments"
      ]
    },
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    }
  }
}

5. Debugging Steps

  1. Click "Open Logs Folder" in Claude settings to see detailed error messages

  2. Test each server individually with the commands above

  3. Check for missing files or broken paths

  4. Verify Node.js version - MCP requires Node 18+

6. Next Steps

Once we identify the specific errors:

  • Fix missing dependencies

  • Rebuild custom servers with proper MCP protocol

  • Update environment variables

  • Test integration step by step

What do you see when you run the individual server tests? The error messages will tell us exactly what needs to be fixed!

I recently connected with an old friend who works for a successful company that is interested in setting up and deeply integrating AI into their business. They are a supplement company that sells primarily online and also does a substantial amount of business over the phone. They are an ideal candidate for the AI workflow, efficiency, and integrated AI solutions I have been building my company to provide. Please help me clearly define my brand and what we offer so that we can build our flagship home website and demonstrate the massive value we can add to the company behind novexbiotech.com, which also owns serovital.com as well as other subsidiaries.