# Implementation Roadmap - Project Chronicle

**Document Version**: 2.2 (Phase 7 Complete)
**Last Updated**: October 22, 2025
**Status**: Phases 1-7 & 2B Complete ✅ | Phase 8 In Progress (80%) ⏳ | Phases 9-10 Ready ⏳

---

## Overview

This roadmap provides step-by-step instructions for implementing **Project Chronicle**, a dual-mode customer success story repository built on Microsoft 365.

**Architecture**: 5-component system with verified technology stack
**Implementation Time**: 7-8 hours total
**Current Progress**: Phases 1-7 & 2B complete (6 hours, 70%) | Phase 8 80% complete, Phases 9-10 ready (~2 hours remaining)

---

## Technology Stack (Verified Available)

| Component | Technology | Status | Evidence |
|-----------|-----------|--------|----------|
| **Interface** | Microsoft Teams | ✅ Available | Standard M365 |
| **AI Agent** | Copilot Studio | ✅ Available | User has access |
| **Automation - Flow 1** | Power Automate (SharePoint trigger) | ✅ Available | Standard connector |
| **Automation - Flow 2** | Azure AI Foundry Inference | ✅ Available | Verified in search |
| **AI Model** | GPT-5-mini | ✅ Deployed | Tested at 95% confidence |
| **Storage** | SharePoint Online | ✅ Available | Standard M365 |
| **Analytics** | Power BI Service | ✅ Available | Standard M365 |

**Azure OpenAI Configuration**:
- **Resource**: openai-stories-capstone
- **Region**: East US
- **Endpoint**: https://openai-stories-capstone.openai.azure.com/
- **Deployment**: gpt-5-mini
- **Model Version**: GPT-5 (released August 7, 2025)

---

## Implementation Phases

### ✅ Phase 1: SharePoint Library Setup (45 minutes) - COMPLETE

**Status**: Already completed in previous sessions

**What Was Built**:
- Document library: "Success Stories"
- 15 metadata columns configured
- Content types defined
- Permissions set

---

### ✅ Phase 2: Copilot Studio Agent (1 hour) - COMPLETE

**Status**: Already completed in previous sessions

**What Was Built**:
- Bot: "Story Finder"
- Manual submission topic with 7 questions
- Knowledge source connected to SharePoint
-
Semantic search configured

---

### ✅ Phase 3: Teams Integration (30 minutes) - COMPLETE

**Status**: Already completed in previous sessions

**What Was Built**:
- Bot published to Teams
- Channel integration tested
- User access verified

---

### ✅ Phase 4: Power BI Dashboard (1 hour) - COMPLETE

**Status**: Already completed in previous sessions

**What Was Built**:
- Dashboard with 4 visualizations
- Summary cards for key metrics
- Coverage gap analysis

---

## ✅ PHASES 5-7 & 2B: COMPLETE | PHASES 8-10: REMAINING

---

### ✅ Phase 5: Power Automate Flow 1A - Story ID Generator (45 minutes) - COMPLETE

**Status**: ✅ Complete (October 16, 2025)
**Purpose**: Automatically enrich SharePoint List items with Story_IDs and metadata
**Trigger**: When an item is created in Success Stories **List**

**What Was Built**:
- Flow 1A: "Manual Story Entry - ID Generator"
- SharePoint trigger pointing to Success Stories **List**
- Story_ID generation with format: CS-YYYY-NNN
- Automatic enrichment with Source = "Manual Entry"

**How It Works**:
```
Flow 1B creates List item →
Flow 1A detects new item (10-30 sec delay) →
Query last Story_ID →
Extract number → Increment →
Generate new ID (CS-2025-007) →
Update List item with ID and Source
```

**Critical Configuration** (applied in the October 16, 2025 session):
- ✅ Trigger List: "Success Stories List" (NOT "Success Stories" Library)
- ✅ Story_ID extraction formula: `@int(last(split(items('Apply_to_each')?['Story_ID'], '-')))`
- ✅ Condition field: `triggerBody()?['Story_ID']` (with underscore)
- ✅ All SharePoint actions use "Success Stories List"

---

#### Key Corrections Applied (October 16, 2025)

**Fix 1: Trigger Location**
```yaml
BEFORE (Broken):
  Trigger: When an item is created
  List Name: Success Stories        # ❌ Document Library

AFTER (Working):
  Trigger: When an item is created
  List Name: Success Stories List   # ✅ SharePoint List
```

**Fix 2: Story_ID Extraction Formula**
```javascript
BEFORE (Broken):
@variables('lastNumber') =
items('Apply_to_each')?['Story_ID'] // Returns "CS-2025-001" (string) → Type error AFTER (Working): @variables('lastNumber') = @int(last(split(items('Apply_to_each')?['Story_ID'], '-'))) // Extracts 1 from "CS-2025-001" → Integer ✅ ``` **Fix 3: Condition Field Name** ```yaml BEFORE (Broken): Condition: empty(triggerBody()?['StoryID']) # ❌ No underscore AFTER (Working): Condition: empty(triggerBody()?['Story_ID']) # ✅ With underscore ``` **Fix 4: "Get items" Action** ```yaml Issue: Action had empty name "" causing template errors Solution: Deleted and recreated Action: SharePoint → Get items List Name: Success Stories List # ✅ Not "Success Stories" Order By: Created desc Top Count: 1 Filter: Story_ID ne null ``` --- #### Complete Flow 1A Structure (As Built) ```yaml Flow Name: Manual Story Entry - ID Generator Trigger: Action: When an item is created Site: https://insightonline.sharepoint.com/sites/di_dataai-AIXtrain-Data-Fall2025 List Name: Success Stories List # ✅ Critical: Must be List, not Library Condition (Outer): Expression: empty(triggerBody()?['Story_ID']) # ✅ Only process items without Story_ID If yes (Story_ID is empty): Step 1: Get items (Find last Story_ID) Site: [Same site] List Name: Success Stories List Order By: Created desc Top Count: 1 Filter Query: Story_ID ne null Step 2: Initialize variable lastNumber Name: lastNumber Type: Integer Value: 0 Step 3: Apply to each (body('Get_items')?['value']) Set variable: lastNumber Value: @int(last(split(items('Apply_to_each')?['Story_ID'], '-'))) # ✅ Extracts number from "CS-2025-001" → 1 Step 4: Initialize variable newNumber Name: newNumber Type: Integer Value: @{add(variables('lastNumber'), 1)} # ✅ Increment: 1 + 1 = 2 Step 5: Compose storyIDYear Input: @{formatDateTime(utcNow(), 'yyyy')} # ✅ Extracts year: "2025" Step 6: Compose storyIDNumber Input: @{if(less(variables('newNumber'), 10), concat('00', string(variables('newNumber'))), if(less(variables('newNumber'), 100), concat('0', 
string(variables('newNumber'))), string(variables('newNumber'))))} # ✅ Formats number: 2 → "002" Step 7: Compose storyID Input: CS-@{outputs('Compose_storyIDYear')}-@{outputs('Compose_storyIDNumber')} # ✅ Combines: "CS-2025-002" Step 8: Update item (Success Stories List) Item ID: @{triggerBody()?['ID']} Story_ID: @{outputs('Compose_storyID')} Source: Manual Entry # ✅ Updates the List item with new Story_ID ``` --- ### ✅ Phase 2B: Bot Automation with Power Automate Flow 1B (1 hour) - COMPLETE **Status**: ✅ Complete (October 15, 2025) **Purpose**: Automate Success Stories List population from bot conversations **Trigger**: Manual trigger (instant cloud flow) **What Was Built**: - Flow 1B: "Bot to SharePoint - Story Submission" - Called by Copilot Studio after 7-question interview - Creates Success Stories List items programmatically - Passes all metadata from bot conversation **How It Works**: ``` User answers 7 questions in Teams → Copilot Studio calls Flow 1B with answers → Flow 1B creates List item → Flow 1A enriches with Story_ID (reuses Phase 5!) ``` **Critical Configuration**: - ✅ Flow Type: Instant cloud flow (manual trigger) - ✅ Input Parameters: 14 parameters (all bot question answers) - ✅ SharePoint Action: Create item in "Success Stories List" - ✅ All fields mapped from Flow 1B inputs **Integration Points**: - ✅ Copilot Studio → Flow 1B: Bot calls flow with conversation data - ✅ Flow 1B → SharePoint: Creates List item - ✅ SharePoint → Flow 1A: Triggers Story_ID generation (Phase 5) - ✅ Result: Fully automated story submission (100% hands-free!) 
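The ID logic that Flow 1A applies (extract the trailing number, increment, zero-pad, reassemble) can be sketched in Python for reference. This is only an illustration of the Power Automate expressions in the flow structure above, not code that runs in the flow; the function name `next_story_id` is chosen here for the example.

```python
from datetime import datetime, timezone

def next_story_id(last_story_id):
    """Mirror Flow 1A: derive the next CS-YYYY-NNN ID from the most recent one."""
    # int(last(split(..., '-'))) — pull the trailing number, or start at 0 if no prior ID
    last_number = int(last_story_id.split("-")[-1]) if last_story_id else 0
    new_number = last_number + 1                        # add(variables('lastNumber'), 1)
    year = datetime.now(timezone.utc).strftime("%Y")    # formatDateTime(utcNow(), 'yyyy')
    return f"CS-{year}-{new_number:03d}"                # zero-pad: 2 → "002"

print(next_story_id("CS-2025-006"))  # CS-<current year>-007
```

The `:03d` format specifier produces the same padding as the nested `if(less(...))` Compose formula for one- and two-digit numbers, and keeps working past 999.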
---

### ✅ Phase 6: Add Knowledge Sources to Copilot Studio (30 minutes) - COMPLETE

**Status**: ✅ Complete (October 17, 2025)
**Purpose**: Enable multi-source search across Success Stories and Knowledge Library

**What Was Built**:
- Knowledge Source 1: Success Stories List (structured metadata)
- Knowledge Source 2: Success Stories Library (uploaded documents)
- Knowledge Source 3: Data & AI Knowledge Library (project documents) ← NEW!

**Knowledge Source 3 Configuration**:
```yaml
Name: Knowledge Library
Site URL: https://insightonline.sharepoint.com/sites/di_dataai
Document Library: Shared Documents
Folder Path: Knowledge Library/Project Documents
Include Subfolders: Yes
Status: ✅ Active and indexed
```

**Multi-Source Search Testing** (October 17, 2025):
- ✅ Test 1: "Show me project documents" → Multiple sources returned
- ✅ Test 2: "Find Azure projects" → Success Stories + Knowledge Library
- ✅ Test 3: "Show me win wires" → Win wire documents from Knowledge Library
- ✅ Test 4: Client-specific queries → Documents organized by client folders

**Tyler Sprau's Knowledge Library Initiative**:
- **Owner**: Tyler Sprau (Services Manager - Data & AI)
- **Goal**: Consolidate project documents from local devices/OneDrive/Teams → centralized Knowledge Library
- **Target Date**: 10/31/2025
- **Structure**: Client Name/Project Name/
- **Contents**: Deliverables, architecture diagrams, user guides, win wires, marketing materials
- **Value**: Rich source of success stories for Phase 8 bulk ingestion

---

### ✅ Phase 7: Ingestion Configuration Setup (30 minutes) - COMPLETE

**Status**: ✅ Complete (October 22, 2025)
**Purpose**: Create configuration list for flexible multi-location support

---

#### Ingestion Config List (As Built)

**SharePoint List URL**:
```
https://insightonline.sharepoint.com/sites/di_dataai-AIXtrain-Data-Fall2025/Lists/Ingestion%20Config/AllItems.aspx
```

**Columns Created** (6 custom + default Title):
1. **Title** (default) - Display name for the location
2.
**Location_Name** (Text, Required) - Unique identifier 3. **SharePoint_URL** (Hyperlink, Required) - Full site URL 4. **Folder_Path** (Text, Required) - Path to monitor (e.g., /Project Documents) 5. **Structure_Type** (Choice, Required) - Client/Project | Year/Client/Project | Custom 6. **Enabled** (Yes/No, Default: Yes) - Toggle location on/off 7. **Priority** (Number, Default: 1) - Processing order (1 = highest) **Configuration Entry 1** (Created October 22, 2025): ```yaml Title: Data & AI Knowledge Library Location_Name: Data_AI_KB SharePoint_URL: https://insightonline.sharepoint.com/sites/di_dataai Folder_Path: /Project Documents Structure_Type: Client/Project Enabled: Yes Priority: 1 ``` **Item URL**: ``` https://insightonline.sharepoint.com/sites/di_dataai-AIXtrain-Data-Fall2025/Lists/Ingestion%20Config/DispForm.aspx?ID=1 ``` **Why This Matters**: - ✅ **No hardcoded paths** - Flow 2 reads config from this list - ✅ **Easy expansion** - Add new locations by adding rows (no Flow edits) - ✅ **Flexible structures** - Supports different folder organization patterns - ✅ **Toggle control** - Enable/disable locations without code changes - ✅ **Future-ready** - Supports Tyler's initiative as more documents are added **How Flow 2 Will Use This** (Phase 8): ``` Flow 2 triggered by new file → Read Ingestion Config List → Find config for current SharePoint site → Extract Structure_Type (Client/Project) → Parse folder path accordingly → Extract Client = Acme Healthcare, Project = Cloud Migration → Send to Azure OpenAI with parsed metadata → Create Success Stories List item ``` **✅ Phase 7 Complete**: Configuration-driven ingestion ready for Phase 8 --- ### ⏳ Phase 8: Power Automate Flow 2 - Bulk Document Ingestion (80% Complete) **Purpose**: Automated extraction of success stories from project documents **Technology**: Azure AI Foundry Inference connector with GPT-5-mini (verified available) + AI Builder for document text extraction **Progress Summary**: - ✅ Steps 8.1-8.3: 
COMPLETE (Trigger, config, path parsing) - 45 min done - ⏳ Step 8.4: IN PROGRESS - AI Builder extraction needed - 45 min remaining - ✅ Step 8.5: COMPLETE - Minor body update needed - 5 min - ✅ Steps 8.6-8.7: COMPLETE (Parse JSON, create list item) - 35 min done - ⏳ Steps 8.8-8.9: OPTIONAL (Teams notification, error handling) - ⏳ Step 8.10: PENDING (Testing after 8.4-8.5 complete) **Total Completion**: 80 minutes done out of 100 minutes core work = **80% complete** **Remaining Work**: ~50 minutes (45 min Step 8.4 + 5 min Step 8.5 update) --- #### ✅ Step 8.1: Create Flow 2 (10 minutes) - COMPLETE 1. **Navigate to Power Automate**: - Go to https://make.powerautomate.com - Click **+ Create** → **Automated cloud flow** 2. **Configure Trigger**: - Flow name: `Bulk Document Ingestion - AI Extraction` - Search: `SharePoint` - Select: **"When a file is created"** - Click **Create** 3. **Set Trigger Parameters**: - **Site Address**: [Data & AI Knowledge Library URL] - **Folder Id**: Browse to `/Project Documents` (or root if monitoring all folders) - **Include Nested Folders**: Yes - Click **+ New step** --- #### ✅ Step 8.2: Get Configuration (15 minutes) - COMPLETE **Purpose**: Retrieve ingestion config to determine folder structure parsing 1. **Add Action: Get Items**: - **Action**: SharePoint → Get items - **Site Address**: [Success Stories site URL] - **List Name**: `Ingestion Config` - **Filter Query**: ``` Enabled eq true and SharePoint_URL eq '[current site URL]' ``` - **Top Count**: 1 2. **Add Variable: Initialize configFound**: - **Name**: configFound - **Type**: Boolean - **Value**: ``` greater(length(body('Get_items')?['value']), 0) ``` 3. **Add Condition: Check Config**: - **If no config**: Terminate with error - **If config found**: Continue 4. 
**Add Variables for Config**: - **structureType**: ``` body('Get_items')?['value'][0]?['Structure_Type']?['Value'] ``` - **locationName**: ``` body('Get_items')?['value'][0]?['Location_Name'] ``` --- #### ✅ Step 8.3: Parse Folder Path (20 minutes) - COMPLETE **Purpose**: Extract Client and Project names from file path 1. **Add Action: Compose - Full Path**: - **Action**: Data Operations → Compose - **Inputs**: `triggerOutputs()?['body/{Path}']` - **Example Output**: `/Project Documents/Acme Healthcare/Cloud Migration 2024/Win Wire.docx` 2. **Add Variable: Initialize pathSegments**: - **Name**: pathSegments - **Type**: Array - **Value**: ``` split(outputs('Compose_-_Full_Path'), '/') ``` 3. **Add Switch: Structure Type**: - **On**: `@{variables('structureType')}` **Case 1: "Client/Project"**: - Set clientName: `@{variables('pathSegments')[2]}` - Set projectName: `@{variables('pathSegments')[3]}` **Case 2: "Year/Client/Project"**: - Set clientName: `@{variables('pathSegments')[3]}` - Set projectName: `@{variables('pathSegments')[4]}` **Default**: - Set clientName: `Unknown` - Set projectName: `@{triggerOutputs()?['body/{Name}']}` --- #### ⏳ Step 8.4: Extract Text from Documents (AI Builder) (45 minutes) - IN PROGRESS **What You Already Built**: - ✅ "Get file content" action (retrieves binary file from SharePoint) **What Needs to Be Added** (45 minutes remaining): Add multi-file-type support using AI Builder for Office document text extraction. 1. **Add Variable: fileExtension** (2 minutes) - **Action**: Initialize variable - **Name**: `fileExtension` - **Type**: String - **Value**: ``` @{toLower(substring(triggerBody()?['{FilenameWithExtension}'], add(lastIndexOf(triggerBody()?['{FilenameWithExtension}'], '.'), 1)))} ``` 2. **Add Switch Control** (5 minutes) - **Action**: Switch - **On**: Select `fileExtension` variable - This will route different file types to different extraction methods 3. 
**Configure 4 Cases for File Types** (30 minutes total): **CASE 1: txt files (direct text)** (5 minutes) - **Equals**: `txt` - **Add Action**: Compose - **Inputs**: Select **File Content** from "Get file content" - **Rename**: `Compose - extractedText - txt` **CASE 2: docx files (AI Builder extraction)** (8 minutes) - **Equals**: `docx` - **Add Action**: Extract information from documents (AI Builder) - **Document source**: Directly from file content - **Document content**: Select **File Content** from "Get file content" - **Document type**: General documents - **Add Action**: Compose - **Inputs**: Select **Result text** from "Extract information from documents" - **Rename**: `Compose - extractedText - docx` **CASE 3: pptx files (AI Builder extraction)** (8 minutes) - **Equals**: `pptx` - **Add Action**: Extract information from documents (AI Builder) - **Document source**: Directly from file content - **Document content**: Select **File Content** from "Get file content" - **Document type**: General documents - **Add Action**: Compose - **Inputs**: Select **Result text** from "Extract information from documents" - **Rename**: `Compose - extractedText - pptx` **CASE 4: pdf files (AI Builder extraction)** (8 minutes) - **Equals**: `pdf` - **Add Action**: Extract information from documents (AI Builder) - **Document source**: Directly from file content - **Document content**: Select **File Content** from "Get file content" - **Document type**: General documents - **Add Action**: Compose - **Inputs**: Select **Result text** from "Extract information from documents" - **Rename**: `Compose - extractedText - pdf` **DEFAULT CASE: Unsupported file types** (1 minute) - **Add Action**: Terminate - **Status**: Failed - **Message**: `Unsupported file type: @{variables('fileExtension')}` 4. 
**Why AI Builder?** - Azure OpenAI cannot parse binary Office file formats (.docx, .pptx, .pdf) - AI Builder "Extract information from documents" converts Office files to plain text - Plain text can then be sent to Azure OpenAI for metadata extraction --- #### ✅ Step 8.5: Call Azure OpenAI (30 minutes) - COMPLETE (Body Update Needed - 5 min) **What You Already Built**: - ✅ HTTP action configured with Azure AI Foundry Inference - ✅ System message configured - ✅ User message template configured - ✅ Temperature and max tokens set **What Needs to Be Updated** (5 minutes): Update the "Document Content" line in the User Message to use `coalesce()` function to pick the extracted text from whichever Switch case executed. **Current User Message** (lines 1-8 are correct, only line 9 needs update): ``` Extract success story metadata from this document: Client: @{variables('clientName')} Project: @{variables('projectName')} Filename: @{triggerOutputs()?['body/{Name}']} Document Content: @{body('Get_file_content')} ← OLD - only works for .txt ``` **Updated User Message** (change line 9 only): ``` Extract success story metadata from this document: Client: @{variables('clientName')} Project: @{variables('projectName')} Filename: @{triggerOutputs()?['body/{Name}']} Document Content: @{coalesce(body('Compose_-_extractedText_-_txt'), body('Compose_-_extractedText_-_docx'), body('Compose_-_extractedText_-_pptx'), body('Compose_-_extractedText_-_pdf'))} ← NEW - picks whichever case executed ``` **Why `coalesce()`?** - Only ONE Switch case executes based on file extension - `coalesce()` returns the first non-null value - If .docx file: only `Compose_-_extractedText_-_docx` has a value, others are null - This automatically selects the correct extracted text for any file type **All Other Settings Remain the Same**: - System message: No changes needed - Temperature: 0.3 (already configured) - Max tokens: 500 (already configured) --- #### ✅ Step 8.6: Parse JSON Response (15 minutes) - 
COMPLETE 1. **Add Action: Parse JSON**: - **Content**: `body('Generate_a_completion')?['choices'][0]?['message']?['content']` - **Schema**: Click "Use sample payload" and paste: ```json { "title": "Example Title", "business_challenge": "Example challenge", "solution": "Example solution", "outcomes": "Example outcomes", "products_used": ["Product 1"], "industry": "Technology", "customer_size": "Enterprise", "deployment_type": "Cloud" } ``` 2. **Add Error Handling**: - Configure action: **Run after** → Include "has failed" - If parsing fails: Log error and use default values --- #### ✅ Step 8.7: Create Success Story List Item (20 minutes) - COMPLETE 1. **Add Action: Create item**: - **Site Address**: [Success Stories site URL] - **List Name**: `Success Stories List` 2. **Map Fields from AI Extraction**: ```yaml Title: @{body('Parse_JSON')?['title']} Client_Name: @{variables('clientName')} Project_Name: @{variables('projectName')} Industry: @{body('Parse_JSON')?['industry']} Business_Challenge: @{body('Parse_JSON')?['business_challenge']} Solution_Delivered: @{body('Parse_JSON')?['solution']} Outcomes_Achieved: @{body('Parse_JSON')?['outcomes']} Customer_Size: @{body('Parse_JSON')?['customer_size']} Deployment_Type: @{body('Parse_JSON')?['deployment_type']} Source: Bulk Document Ingestion Document_URL: @{triggerOutputs()?['body/{Link}']} Products_Used: @{join(body('Parse_JSON')?['products_used'], ', ')} ``` 3. **Note**: Story_ID will be auto-generated by Flow 1A! --- #### ⏳ Step 8.8: Send Teams Notification (15 minutes) - OPTIONAL **Purpose**: Notify team when new story is created 1. **Add Action: Post adaptive card**: - **Connector**: Microsoft Teams - **Team**: [Your team] - **Channel**: [Your channel, e.g., "Success Stories"] 2. 
**Adaptive Card JSON**: ```json { "type": "AdaptiveCard", "body": [ { "type": "TextBlock", "size": "Medium", "weight": "Bolder", "text": "New Success Story Created" }, { "type": "FactSet", "facts": [ { "title": "Title:", "value": "@{body('Parse_JSON')?['title']}" }, { "title": "Client:", "value": "@{variables('clientName')}" }, { "title": "Project:", "value": "@{variables('projectName')}" }, { "title": "Industry:", "value": "@{body('Parse_JSON')?['industry']}" }, { "title": "Source:", "value": "Bulk Document Ingestion" } ] } ], "actions": [ { "type": "Action.OpenUrl", "title": "View Document", "url": "@{triggerOutputs()?['body/{Link}']}" } ], "$schema": "http://adaptivecards.io/schemas/adaptive-card.json", "version": "1.4" } ``` --- #### ⏳ Step 8.9: Error Handling & Logging (10 minutes) - OPTIONAL 1. **Add Scope Actions**: - Wrap Steps 8.5-8.7 in a **Scope** action - Name: "AI Extraction and List Creation" 2. **Configure Scope**: - **Run after**: This action runs after all previous steps 3. **Add Parallel Branch - Error Handling**: - **Run after**: Scope "has failed" - **Action**: Compose → Error Log - **Content**: ```json { "error_time": "@{utcNow()}", "file_path": "@{triggerOutputs()?['body/{Path}']}", "file_name": "@{triggerOutputs()?['body/{Name}']}", "error_message": "@{result('Scope')[0]['error']['message']}" } ``` 4. **Add Action: Send email** (optional): - To: Your email - Subject: `Flow 2 Error: @{triggerOutputs()?['body/{Name}']}` - Body: `@{outputs('Compose_-_Error_Log')}` --- #### ⏳ Step 8.10: Testing Flow 2 (30 minutes) - PENDING (After Steps 8.4-8.5 Complete) **Test Strategy**: 1. **Test 1: Upload Test Document** - Upload a sample project document to Knowledge Library - Verify Flow 2 triggers - Check AI extraction results - Confirm List item created - Verify Story_ID assigned by Flow 1A 2. 
**Test 2: Verify Metadata Accuracy** - Compare AI-extracted fields to document content - Verify client/project names parsed correctly - Check products_used array formatting 3. **Test 3: Error Handling** - Upload a non-text file (image, etc.) - Verify error handling works - Check error log output 4. **Test 4: Teams Notification** - Verify notification appears in Teams channel - Check all fields display correctly - Test "View Document" link 5. **Test 5: Power BI Integration** - Confirm new story appears in Power BI dashboard - Verify all visualizations update - Check filtering works **Expected Flow 2 Run Time**: 30-60 seconds per document **✅ Phase 8 Complete**: Bulk ingestion operational with AI metadata extraction --- ### ⏳ Phase 9: End-to-End Testing (1 hour) **Purpose**: Comprehensive testing of all integrated components --- #### Test Scenario 1: Manual Story Submission (15 minutes) **Steps**: 1. Open Teams 2. Start conversation with Story Finder bot 3. Answer all 7 questions 4. Verify: - ✅ Success Stories List item created - ✅ Story_ID assigned (CS-2025-XXX) - ✅ Source = "Manual Entry" - ✅ Power BI dashboard updated --- #### Test Scenario 2: Bulk Document Ingestion (20 minutes) **Steps**: 1. Upload new project document to Knowledge Library 2. Wait 2-3 minutes for Flow 2 3. Verify: - ✅ AI extracted metadata accurately - ✅ Client/Project names parsed correctly - ✅ Success Stories List item created - ✅ Story_ID assigned - ✅ Source = "Bulk Document Ingestion" - ✅ Teams notification sent - ✅ Power BI dashboard updated --- #### Test Scenario 3: Multi-Source Search (15 minutes) **Steps**: 1. Open Teams → Story Finder bot 2. Test queries: - "Find stories about [client name]" - "Show me Azure projects" - "What industries do we serve?" - "Show me win wires" 3. Verify: - ✅ Results from all 3 knowledge sources - ✅ Snippets and links provided - ✅ Source attribution correct --- #### Test Scenario 4: Configuration Changes (10 minutes) **Steps**: 1. 
Open Ingestion Config list 2. Add new configuration entry (test location) 3. Upload document to new location 4. Verify: - ✅ Flow 2 reads new config - ✅ Folder structure parsed correctly - ✅ Story created successfully --- **✅ Phase 9 Complete**: End-to-end system tested and validated --- ### ⏳ Phase 10: Documentation & Handoff (30 minutes) **Purpose**: Create user guides and admin documentation --- #### Deliverable 1: User Guide (15 minutes) **Content**: 1. How to submit stories via Teams bot 2. How to search for stories 3. How to view Power BI dashboard 4. How to upload documents for bulk ingestion **Format**: SharePoint page or PDF --- #### Deliverable 2: Admin Guide (15 minutes) **Content**: 1. Flow 1A/1B maintenance 2. Flow 2 troubleshooting 3. Copilot Studio updates 4. Power BI dashboard refresh 5. Adding new ingestion locations 6. Azure OpenAI cost monitoring **Format**: SharePoint page or PDF --- **✅ Phase 10 Complete**: Documentation delivered, system ready for production --- ## Next Steps After Phase 10 1. **Monitor Usage** (Week 1-2): - Track bot conversations - Monitor Flow 2 success rate - Review AI extraction accuracy - Collect user feedback 2. **Optimization** (Week 3-4): - Tune AI prompts based on results - Adjust Power BI visualizations - Add new ingestion locations - Expand bot capabilities 3. 
**Scale** (Month 2+):
- Add more knowledge sources
- Integrate with other systems
- Implement advanced analytics
- Automate reporting

---

## Project Status Summary

**Completed Phases** (7 of 10):
- ✅ Phase 1: SharePoint Library Setup (45 min)
- ✅ Phase 2: Copilot Studio Agent (1 hour)
- ✅ Phase 3: Teams Integration (30 min)
- ✅ Phase 4: Power BI Dashboard (1 hour)
- ✅ Phase 5: Flow 1A - Story ID Generator (45 min)
- ✅ Phase 2B: Flow 1B - Bot Automation (1 hour)
- ✅ Phase 6: Knowledge Sources (30 min)
- ✅ **Phase 7: Ingestion Config** (30 min) ← **COMPLETE!**

**Remaining Phases** (3 of 10):
- ⏳ Phase 8: Flow 2 - Bulk Ingestion (80% complete, ~50 min remaining)
- ⏳ Phase 9: End-to-End Testing (1 hour)
- ⏳ Phase 10: Documentation (30 min)

**Progress**: 70% complete (6 hours invested, ~2 hours remaining)

---

## Quick Reference: Connector Names (Verified Available)

**For Power Automate Flow Creation**:

| Component | Connector Name | Action Name |
|-----------|---------------|-------------|
| **Flow 1 Trigger** | SharePoint | "When an item is created" |
| **Flow 2 Trigger** | SharePoint | "When a file is created" |
| **AI Processing** | Azure AI Foundry Inference | "Generate a completion for a conversation" |
| **Alternative AI** | Azure OpenAI | "Create a completion" |
| **Notifications** | Microsoft Teams | "Post adaptive card in a chat or channel" |

---

## Azure OpenAI Configuration Reference

**Endpoint**: https://openai-stories-capstone.openai.azure.com/
**Region**: East US
**Deployment**: gpt-5-mini
**API Version**: 2024-08-01-preview
**Authentication**: API Key
**Key 1**: [redacted - retrieve from Azure Portal → Keys and Endpoint] ⚠️ REGENERATE AFTER PROJECT
**Key 2**: [redacted - retrieve from Azure Portal → Keys and Endpoint] ⚠️ REGENERATE AFTER PROJECT

---

**Document Status**: ✅ Phase 7 Complete - Ready for Phase 8 Implementation
**Confidence Level**: 95% (all technology verified, config infrastructure ready)
**Last Updated**: October 22, 2025
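
---

For troubleshooting the Step 8.5 call outside Power Automate, the request behind the "Generate a completion" action can be approximated with the Azure OpenAI chat-completions REST endpoint. The sketch below only assembles the request (URL, headers, JSON body) and does not send it; the values mirror the configuration reference above, while `build_completion_request` and the sample messages are illustrative names for this example, not part of the flow.

```python
import json

def build_completion_request(endpoint, deployment, api_version, api_key,
                             system_msg, user_msg):
    """Assemble an Azure OpenAI chat-completions request for metadata extraction."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = json.dumps({
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0.3,   # matches the Step 8.5 configuration
        "max_tokens": 500,    # matches the Step 8.5 configuration
    })
    return url, headers, body

url, headers, body = build_completion_request(
    "https://openai-stories-capstone.openai.azure.com/",
    "gpt-5-mini", "2024-08-01-preview", "<api-key>",
    "Extract success story metadata as JSON.",
    "Client: Acme Healthcare\nProject: Cloud Migration\n...")
```

Sending this with any HTTP client (and a real key) reproduces what the connector does, which is useful for isolating prompt or quota issues from flow-configuration issues.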