Compare commits

..

2 Commits

Author SHA1 Message Date
c5893a6de4 implemented basic feedback words without finite points or good prompt 2026-01-03 00:50:09 -07:00
554efec086 pre feedback checkpoint 2026-01-03 00:20:26 -07:00
9 changed files with 459 additions and 40 deletions

.gitignore vendored

@@ -3,4 +3,4 @@ venv
 __pycache__
 historic_prompts.json
 pool_prompts.json
-feedback_words.json
+#feedback_words.json


@@ -1 +1,98 @@
# Task: Combine pool and history stats into a single function and single menu item
## Changes Made
### 1. Created New Combined Stats Function
- Added `show_combined_stats()` method to `JournalPromptGenerator` class
- Combines both pool statistics and history statistics into a single function
- Displays two tables: "Prompt Pool Statistics" and "Prompt History Statistics"
### 2. Updated Interactive Menu
- Changed menu from 5 options to 4 options:
- 1. Draw prompts from pool (no API call)
- 2. Fill prompt pool using API
- 3. View combined statistics (replaces separate pool and history stats)
- 4. Exit
- Updated menu handling logic to use the new combined stats function
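
The updated dispatch can be sketched as a small mapping; this is illustrative only (the handler names mirror the methods named in this summary, not the actual menu loop):

```python
def dispatch(choice: str) -> str:
    """Map a 4-option menu choice to the action it triggers."""
    actions = {
        "1": "draw_from_pool",       # no API call
        "2": "fill_pool_to_target",  # API-backed pool fill
        "3": "show_combined_stats",  # replaces the two separate stats views
        "4": "exit",
    }
    return actions.get(choice, "invalid")
```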
### 3. Updated Command-Line Arguments
- Removed `--pool-stats` argument
- Updated `--stats` argument description to "Show combined statistics (pool and history)"
- Updated main function logic to use `show_combined_stats()` instead of separate functions
### 4. Removed Old Stats Functions
- Removed `show_pool_stats()` method
- Removed `show_history_stats()` method
- All functionality consolidated into `show_combined_stats()`
### 5. Code Cleanup
- Removed unused imports and references to old stats functions
- Ensured all menu options work correctly with the new combined stats
## Testing
- Verified `--stats` command-line argument works correctly
- Tested interactive mode shows updated menu
- Confirmed combined stats display both pool and history information
- Tested default mode (draw from pool) still works
- Verified fill-pool option starts correctly
## Result
Successfully combined pool and history statistics into a single function and single menu item, simplifying the user interface while maintaining all functionality.
---
# Task: Implement theme feedback words functionality with new menu item
## Changes Made
### 1. Added New Theme Feedback Words API Call
- Created `generate_theme_feedback_words()` method that:
- Loads `ds_feedback.txt` prompt template
- Sends historic prompts to AI API for analysis
- Receives 6 theme words as JSON response
- Parses and validates the response
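
A minimal sketch of the validation step, assuming the API returns the JSON list described above (the real method also falls back to extracting words from plain text when the reply is not valid JSON):

```python
import json

def parse_theme_words(response_content: str) -> list:
    """Return six lowercase theme words, or [] if the reply is malformed."""
    try:
        data = json.loads(response_content)
    except json.JSONDecodeError:
        return []
    # Reject anything that is not a list of exactly six entries
    if not isinstance(data, list) or len(data) != 6:
        return []
    return [str(word).lower().strip() for word in data]
```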
### 2. Added User Rating System
- Created `collect_feedback_ratings()` method that:
- Presents each of the 6 theme words to the user
- Collects ratings from 0-6 for each word
- Creates structured feedback items with keys (feedback00-feedback05)
- Includes weight values based on user ratings
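
The resulting items take the shape stored in `feedback_words.json`; a sketch of the pairing step (in the real method the ratings are collected interactively, one word at a time):

```python
def build_feedback_items(theme_words, ratings):
    """Pair each theme word with its 0-6 rating under keys feedback00..feedback05."""
    items = []
    for i, (word, rating) in enumerate(zip(theme_words, ratings)):
        if not 0 <= rating <= 6:
            raise ValueError(f"rating {rating} is outside 0-6")
        # Zero-padded key matches the feedback_words.json structure
        items.append({f"feedback{i:02d}": word, "weight": rating})
    return items
```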
### 3. Added Feedback Words Update System
- Created `update_feedback_words()` method that:
- Replaces existing feedback words with new ratings
- Saves updated feedback words to `feedback_words.json`
- Maintains the required JSON structure
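
A sketch of the save/load round trip, assuming the same file path and error handling described above (standalone functions here; the real code uses methods on the generator class):

```python
import json

def save_feedback_words(items, path="feedback_words.json"):
    """Overwrite the feedback file with the latest rated items."""
    with open(path, "w") as f:
        json.dump(items, f, indent=2)

def load_feedback_words(path="feedback_words.json"):
    """Load rated items; missing or corrupt files yield an empty list."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return []
```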
### 4. Updated Interactive Menu
- Expanded menu from 4 options to 5 options:
- 1. Draw prompts from pool (no API call)
- 2. Fill prompt pool using API
- 3. View combined statistics
- 4. Generate and rate theme feedback words (NEW)
- 5. Exit
- Added complete implementation for option 4
### 5. Enhanced Data Handling
- Added `_save_feedback_words()` method for saving feedback data
- Updated `_load_feedback_words()` to handle JSON structure properly
- Ensured feedback words are included in AI prompts when generating new prompts
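
The appending step works as follows (this mirrors the snippet added to both `_prepare_prompt` variants in the diff below, extracted as a standalone function for illustration):

```python
import json

def append_feedback(full_prompt, feedback_words):
    """Append rated feedback words (as pretty-printed JSON) to an outgoing prompt."""
    if not feedback_words:
        return full_prompt
    feedback_context = json.dumps(feedback_words, indent=2)
    return f"{full_prompt}\n\nFeedback words:\n{feedback_context}"
```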
## Testing
- Verified all new methods exist and have correct signatures
- Confirmed `ds_feedback.txt` file exists and is readable
- Tested feedback words JSON structure validation
- Verified interactive menu displays new option correctly
- Confirmed existing functionality remains intact
## Result
Successfully implemented a new menu item and functionality for generating theme feedback words. The system now:
1. Makes an API call with historic prompts and `ds_feedback.txt` template
2. Receives 6 theme words from the AI
3. Collects user ratings (0-6) for each word
4. Updates `feedback_words.json` with the new ratings
5. Integrates the feedback into future prompt generation
The implementation maintains backward compatibility while adding valuable feedback functionality to improve prompt generation quality over time.

ds_feedback.txt Normal file

@@ -0,0 +1,14 @@
Request for generation of writing prompts for journaling
Payload:
The previous 60 prompts have been provided as a JSON array for reference.
Guidelines:
Using the attached JSON of writing prompts, you should try to pick out four unique and intentionally vague single-word themes that apply to some portion of the list.
Then add two more single word themes that are fairly different from the other four for a total of six words.
Expected Output:
Output as a JSON list with just the six words, in lowercase.
Despite the provided history being a keyed list or dictionary, the expected return JSON will be a simple list with no keys.
Respond ONLY with valid JSON. No explanations, no markdown, no backticks.

feedback_words.json Normal file

@@ -0,0 +1,26 @@
[
{
"feedback00": "memory",
"weight": 3
},
{
"feedback01": "reflection",
"weight": 4
},
{
"feedback02": "perspective",
"weight": 1
},
{
"feedback03": "ritual",
"weight": 6
},
{
"feedback04": "invention",
"weight": 6
},
{
"feedback05": "solitude",
"weight": 6
}
]


@@ -30,6 +30,7 @@ class JournalPromptGenerator:
         self.client = None
         self.historic_prompts = []
         self.pool_prompts = []
+        self.feedback_words = []
         self.prompt_template = ""
         self.settings = {}
@@ -41,6 +42,7 @@ class JournalPromptGenerator:
         self._load_prompt_template()
         self._load_historic_prompts()
         self._load_pool_prompts()
+        self._load_feedback_words()

     def _load_config(self):
         """Load configuration from environment file."""
@@ -150,6 +152,23 @@ class JournalPromptGenerator:
             self.console.print("[yellow]Warning: pool_prompts.json is corrupted, starting with empty pool[/yellow]")
             self.pool_prompts = []

+    def _load_feedback_words(self):
+        """Load feedback words from JSON file."""
+        try:
+            with open("feedback_words.json", "r") as f:
+                self.feedback_words = json.load(f)
+        except FileNotFoundError:
+            self.console.print("[yellow]Warning: feedback_words.json not found, starting with empty feedback words[/yellow]")
+            self.feedback_words = []
+        except json.JSONDecodeError:
+            self.console.print("[yellow]Warning: feedback_words.json is corrupted, starting with empty feedback words[/yellow]")
+            self.feedback_words = []
+
+    def _save_feedback_words(self):
+        """Save feedback words to JSON file."""
+        with open("feedback_words.json", "w") as f:
+            json.dump(self.feedback_words, f, indent=2)
+
     def _save_pool_prompts(self):
         """Save pool prompts to JSON file."""
         with open("pool_prompts.json", "w") as f:
@@ -186,22 +205,6 @@ class JournalPromptGenerator:
         return drawn_prompts

-    def show_pool_stats(self):
-        """Show statistics about the prompt pool."""
-        total_prompts = len(self.pool_prompts)
-        table = Table(title="Prompt Pool Statistics")
-        table.add_column("Metric", style="cyan")
-        table.add_column("Value", style="green")
-        table.add_row("Prompts in pool", str(total_prompts))
-        table.add_row("Prompts per session", str(self.settings['num_prompts']))
-        table.add_row("Target pool size", str(self.settings['cached_pool_volume']))
-        table.add_row("Available sessions", str(total_prompts // self.settings['num_prompts']))
-        self.console.print(table)
-
     def add_prompt_to_history(self, prompt_text: str):
         """
         Add a single prompt to the historic prompts cyclic buffer.
@@ -243,6 +246,11 @@ class JournalPromptGenerator:
         else:
             full_prompt = self.prompt_template

+        # Add feedback words if available
+        if self.feedback_words:
+            feedback_context = json.dumps(self.feedback_words, indent=2)
+            full_prompt = f"{full_prompt}\n\nFeedback words:\n{feedback_context}"
+
         return full_prompt

     def _parse_ai_response(self, response_content: str) -> List[str]:
@@ -439,6 +447,11 @@ class JournalPromptGenerator:
         else:
             full_prompt = f"{template}\n\n{prompt_instruction}"

+        # Add feedback words if available
+        if self.feedback_words:
+            feedback_context = json.dumps(self.feedback_words, indent=2)
+            full_prompt = f"{full_prompt}\n\nFeedback words:\n{feedback_context}"
+
         return full_prompt

     def _parse_ai_response_with_count(self, response_content: str, expected_count: int) -> List[str]:
@@ -520,6 +533,149 @@ class JournalPromptGenerator:
             self.console.print("[red]Failed to generate prompts[/red]")
             return 0

+    def generate_theme_feedback_words(self) -> List[str]:
+        """Generate 6 theme feedback words using AI based on historic prompts."""
+        self.console.print("\n[cyan]Generating theme feedback words based on historic prompts...[/cyan]")
+
+        # Load the feedback prompt template
+        try:
+            with open("ds_feedback.txt", "r") as f:
+                feedback_template = f.read()
+        except FileNotFoundError:
+            self.console.print("[red]Error: ds_feedback.txt not found[/red]")
+            return []
+
+        # Prepare the full prompt with historic context
+        if self.historic_prompts:
+            historic_context = json.dumps(self.historic_prompts, indent=2)
+            full_prompt = f"{feedback_template}\n\nPrevious prompts:\n{historic_context}"
+        else:
+            self.console.print("[yellow]Warning: No historic prompts available for feedback analysis[/yellow]")
+            return []
+
+        # Show progress
+        with Progress(
+            SpinnerColumn(),
+            TextColumn("[progress.description]{task.description}"),
+            transient=True,
+        ) as progress:
+            task = progress.add_task("Calling AI API for theme analysis...", total=None)
+            try:
+                # Call the AI API
+                response = self.client.chat.completions.create(
+                    model=self.model,
+                    messages=[
+                        {"role": "system", "content": "You are a creative writing assistant that analyzes writing prompts. Always respond with valid JSON."},
+                        {"role": "user", "content": full_prompt}
+                    ],
+                    temperature=0.7,
+                    max_tokens=1000
+                )
+                response_content = response.choices[0].message.content
+            except Exception as e:
+                self.console.print(f"[red]Error calling AI API: {e}[/red]")
+                self.console.print(f"[yellow]Full prompt sent to API (first 500 chars):[/yellow]")
+                self.console.print(f"[yellow]{full_prompt[:500]}...[/yellow]")
+                return []
+
+        # Parse the response to get 6 theme words
+        theme_words = self._parse_theme_words_response(response_content)
+        if not theme_words or len(theme_words) != 6:
+            self.console.print(f"[red]Error: Expected 6 theme words, got {len(theme_words) if theme_words else 0}[/red]")
+            return []
+
+        return theme_words
+
+    def _parse_theme_words_response(self, response_content: str) -> List[str]:
+        """
+        Parse the AI response to extract 6 theme words.
+        Expected format: JSON list of 6 lowercase words.
+        """
+        # First, try to clean up the response content
+        cleaned_content = self._clean_ai_response(response_content)
+
+        try:
+            # Try to parse as JSON
+            data = json.loads(cleaned_content)
+
+            # Check if data is a list
+            if isinstance(data, list):
+                # Ensure all items are strings and lowercase them
+                theme_words = []
+                for word in data:
+                    if isinstance(word, str):
+                        theme_words.append(word.lower().strip())
+                    else:
+                        theme_words.append(str(word).lower().strip())
+                return theme_words
+            else:
+                self.console.print(f"[yellow]Warning: AI returned unexpected data type: {type(data)}[/yellow]")
+                return []
+        except json.JSONDecodeError:
+            # If not valid JSON, try to extract words from text
+            self.console.print("[yellow]Warning: AI response is not valid JSON, attempting to extract theme words...[/yellow]")
+
+            # Look for patterns in the text
+            lines = response_content.strip().split('\n')
+            theme_words = []
+            for line in lines:
+                line = line.strip()
+                if line and len(line) < 50:  # Theme words should be short
+                    # Try to extract words (lowercase, no punctuation)
+                    words = [w.lower().strip('.,;:!?()[]{}"\'') for w in line.split()]
+                    theme_words.extend(words)
+                    if len(theme_words) >= 6:
+                        break
+            return theme_words[:6]
+
+    def collect_feedback_ratings(self, theme_words: List[str]) -> List[Dict[str, Any]]:
+        """Collect user ratings (0-6) for each theme word and return structured feedback."""
+        self.console.print("\n[bold]Please rate each theme word from 0 to 6:[/bold]")
+        self.console.print("[dim]0 = Not relevant, 6 = Very relevant[/dim]\n")
+
+        feedback_items = []
+        for i, word in enumerate(theme_words):
+            while True:
+                try:
+                    rating = Prompt.ask(
+                        f"[bold]Word {i+1}: {word}[/bold]",
+                        choices=[str(x) for x in range(0, 7)],  # 0-6 inclusive
+                        default="3"
+                    )
+                    rating_int = int(rating)
+                    if 0 <= rating_int <= 6:
+                        # Create feedback item with key (feedback00, feedback01, etc.)
+                        feedback_key = f"feedback{i:02d}"
+                        feedback_items.append({
+                            feedback_key: word,
+                            "weight": rating_int
+                        })
+                        break
+                    else:
+                        self.console.print("[yellow]Please enter a number between 0 and 6[/yellow]")
+                except ValueError:
+                    self.console.print("[yellow]Please enter a valid number[/yellow]")
+
+        return feedback_items
+
+    def update_feedback_words(self, new_feedback_items: List[Dict[str, Any]]):
+        """Update feedback words with new ratings."""
+        # Replace existing feedback words with new ones
+        self.feedback_words = new_feedback_items
+        self._save_feedback_words()
+        self.console.print(f"[green]Updated feedback words with {len(new_feedback_items)} items[/green]")
+
     def display_prompts(self, prompts: List[str]):
         """Display generated prompts in a nice format."""
         self.console.print("\n" + "="*60)
@@ -537,19 +693,33 @@ class JournalPromptGenerator:
             self.console.print(panel)
             self.console.print()  # Empty line between prompts

-    def show_history_stats(self):
-        """Show statistics about prompt history."""
-        total_prompts = len(self.historic_prompts)
-        table = Table(title="Prompt History Statistics")
-        table.add_column("Metric", style="cyan")
-        table.add_column("Value", style="green")
-        table.add_row("Total prompts in history", str(total_prompts))
-        table.add_row("History capacity", "60 prompts")
-        table.add_row("Available slots", str(max(0, 60 - total_prompts)))
-        self.console.print(table)
+    def show_combined_stats(self):
+        """Show combined statistics about both prompt pool and history."""
+        # Pool statistics
+        total_pool_prompts = len(self.pool_prompts)
+        pool_table = Table(title="Prompt Pool Statistics")
+        pool_table.add_column("Metric", style="cyan")
+        pool_table.add_column("Value", style="green")
+        pool_table.add_row("Prompts in pool", str(total_pool_prompts))
+        pool_table.add_row("Prompts per session", str(self.settings['num_prompts']))
+        pool_table.add_row("Target pool size", str(self.settings['cached_pool_volume']))
+        pool_table.add_row("Available sessions", str(total_pool_prompts // self.settings['num_prompts']))
+
+        # History statistics
+        total_history_prompts = len(self.historic_prompts)
+        history_table = Table(title="Prompt History Statistics")
+        history_table.add_column("Metric", style="cyan")
+        history_table.add_column("Value", style="green")
+        history_table.add_row("Total prompts in history", str(total_history_prompts))
+        history_table.add_row("History capacity", "60 prompts")
+        history_table.add_row("Available slots", str(max(0, 60 - total_history_prompts)))
+
+        # Display both tables
+        self.console.print(pool_table)
+        self.console.print()  # Empty line between tables
+        self.console.print(history_table)

     def interactive_mode(self):
         """Run in interactive mode with user prompts."""
@@ -575,8 +745,8 @@ class JournalPromptGenerator:
         self.console.print("\n[bold]Options:[/bold]")
         self.console.print("1. Draw prompts from pool (no API call)")
         self.console.print("2. Fill prompt pool using API")
-        self.console.print("3. View pool statistics")
-        self.console.print("4. View history statistics")
+        self.console.print("3. View combined statistics")
+        self.console.print("4. Generate and rate theme feedback words")
         self.console.print("5. Exit")

         choice = Prompt.ask("\nEnter your choice", choices=["1", "2", "3", "4", "5"], default="1")
@@ -610,10 +780,16 @@ class JournalPromptGenerator:
                 self.console.print("[yellow]No prompts were added to pool[/yellow]")

         elif choice == "3":
-            self.show_pool_stats()
+            self.show_combined_stats()

         elif choice == "4":
-            self.show_history_stats()
+            # Generate and rate theme feedback words
+            theme_words = self.generate_theme_feedback_words()
+            if theme_words:
+                feedback_items = self.collect_feedback_ratings(theme_words)
+                self.update_feedback_words(feedback_items)
+            else:
+                self.console.print("[yellow]No theme words were generated[/yellow]")

         elif choice == "5":
             self.console.print("[green]Goodbye! Happy journaling! 📓[/green]")
@@ -636,12 +812,7 @@ def main():
     parser.add_argument(
         "--stats", "-s",
         action="store_true",
-        help="Show history statistics"
+        help="Show combined statistics (pool and history)"
     )
-    parser.add_argument(
-        "--pool-stats", "-p",
-        action="store_true",
-        help="Show pool statistics"
-    )
     parser.add_argument(
         "--fill-pool", "-f",
@@ -655,9 +826,7 @@ def main():
     generator = JournalPromptGenerator(config_path=args.config)

     if args.stats:
-        generator.show_history_stats()
-    elif args.pool_stats:
-        generator.show_pool_stats()
+        generator.show_combined_stats()
     elif args.fill_pool:
         # Fill prompt pool to target volume using API
         total_added = generator.fill_pool_to_target()

reset_baseline.sh Executable file

@@ -0,0 +1,7 @@
#!/bin/bash
#cp baseline_files/ds_prompt.txt .
cp baseline_files/feedback_words.json .
cp baseline_files/historic_prompts.json .
cp baseline_files/pool_prompts.json .
cp baseline_files/settings.cfg .


@@ -0,0 +1,66 @@
#!/usr/bin/env python3
"""
Test script to verify feedback_words integration
"""
import sys
import os

# Add current directory to path
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from generate_prompts import JournalPromptGenerator


def test_feedback_words_loading():
    """Test that feedback_words are loaded correctly."""
    print("Testing feedback_words integration...")
    try:
        # Initialize the generator
        generator = JournalPromptGenerator()

        # Check if feedback_words were loaded
        print(f"Number of feedback words loaded: {len(generator.feedback_words)}")
        if generator.feedback_words:
            print("Feedback words loaded successfully:")
            for i, feedback in enumerate(generator.feedback_words):
                print(f"  {i+1}. {feedback}")
        else:
            print("No feedback words loaded (this might be expected if file is empty)")

        # Test _prepare_prompt method
        print("\nTesting _prepare_prompt method...")
        prompt = generator._prepare_prompt()
        print(f"Prompt length: {len(prompt)} characters")

        # Check if feedback words are included in the prompt
        if generator.feedback_words and "Feedback words:" in prompt:
            print("✓ Feedback words are included in the prompt")
        else:
            print("✗ Feedback words are NOT included in the prompt")

        # Test _prepare_prompt_with_count method
        print("\nTesting _prepare_prompt_with_count method...")
        prompt_with_count = generator._prepare_prompt_with_count(3)
        print(f"Prompt with count length: {len(prompt_with_count)} characters")

        # Check if feedback words are included in the prompt with count
        if generator.feedback_words and "Feedback words:" in prompt_with_count:
            print("✓ Feedback words are included in the prompt with count")
        else:
            print("✗ Feedback words are NOT included in the prompt with count")

        print("\n✅ All tests passed!")
        return True
    except Exception as e:
        print(f"\n❌ Error during testing: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = test_feedback_words_loading()
    sys.exit(0 if success else 1)