Sweep: refactor 'run' method to use an enum per api, and a file per api #1

Open
vanjaoljaca opened this issue Aug 3, 2023 · 3 comments · May be fixed by #3
Labels
sweep Assigns Sweep to an issue or pull request.

Comments

@vanjaoljaca
Owner


Currently all API implementations are in this one file, index.ts, and they're called manually from run().

I want each API to be in its own file,
and I want run() to have one implementation for all API types instead of manually adding an if branch per API,
e.g. this probably means having an enum-to-function mapping.

good luck!
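For concreteness, the requested enum-to-function mapping could look something like the sketch below. Names and handler bodies are illustrative stand-ins, not the repo's actual code; in the real refactor each handler would be imported from its own file.

```typescript
// Hypothetical enum-to-handler dispatch; handler bodies are stand-ins.
enum Api {
    Blog = 'blog',
    Chat = 'chat',
    Language = 'language',
}

type ApiHandler = (query: any, body: any) => Promise<unknown>;

// Each entry would be imported from its own file instead of defined inline.
const handlers: Record<Api, ApiHandler> = {
    [Api.Blog]: async (_query, body) => ({ id: body?.blogId }),
    [Api.Chat]: async (query, body) => ({ blogId: query?.blogId, message: body?.message }),
    [Api.Language]: async (_query, body) => ({ request: body }),
};

async function runApi(api: string, query: any, body: any): Promise<unknown> {
    const handler = handlers[api as Api];
    if (!handler) {
        console.log('unknown api', api);
        return { error: 'unknown api' };
    }
    try {
        return await handler(query, body);
    } catch (error) {
        console.log(error);
        return { error };
    }
}
```

With this shape, adding a new API is one enum member plus one map entry; run() itself never changes.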

@sweep-ai sweep-ai bot added the sweep Assigns Sweep to an issue or pull request. label Aug 3, 2023
@sweep-ai

sweep-ai bot commented Aug 3, 2023

Here's the PR! #3.

⚡ Sweep Free Trial: I used GPT-4 to create this ticket. You have 5 GPT-4 tickets left. For more GPT-4 tickets, visit our payment portal. To get Sweep to recreate this ticket, leave a comment prefixed with "sweep:" or edit the issue.


Step 1: 🔍 Code Search

I found the following snippets in your repository. I will now analyze these snippets and come up with a plan.

Some code snippets I looked at. If some file is missing from here, you can mention the path in the ticket description.

import { AzureFunction, Context, HttpRequest } from "@azure/functions"
import { ChatGPT, ThoughtDB } from "vanjacloud.shared.js";
import * as path from 'path';
import UrlPattern from 'url-pattern';
import axios from 'axios';
import * as fs from "fs";
import keys from "../keys";
import { LanguageTeacher } from "vanjacloud.shared.js";
import moment from "moment";
const systemPromptTemplate = fs.readFileSync('./content/systemprompt.template.md', 'utf8');
const blogSummary = fs.readFileSync('./content/blog1.summary.md', 'utf8');
const blogRaw = fs.readFileSync('./content/blog1.raw.md', 'utf8');
const systemPrompt =
`${systemPromptTemplate}
Introduction:
${blogSummary}
Full Post:
${blogRaw}
`
export interface IMainQuery {
id: number;
}
export interface IMainBody {
body: true
}
export interface IMainParams {
route: string
}
export const Message = ChatGPT.Message;
interface WhatsAppMessage {
object: string;
entry: [{
id: string;
time: number;
changes: [{
field: "messages" | any;
value: {
message: {
from: string;
id: string;
text: {
body: string;
};
}
}
}]
}]
}
const accessToken = process.env['FB_SECRET']
export async function sendMessage(to: string, message: string) {
const url = `https://graph.facebook.com/v17.0/112171265246690/messages`;
const headers = {
'Authorization': `Bearer ${accessToken}`,
'Content-Type': 'application/json',
};
const payload = {
messaging_product: 'whatsapp',
to: to,
type: 'template',
template: {
name: 'hello_world',
language: {
code: 'en_US',
},
},
};
try {
const response = await axios.post(url, payload, { headers });
console.log(response.data);
} catch (error) {
console.error(error);
}
};
function handleMessage(body: WhatsAppMessage) {
if (body.object == 'whatsapp_business_account') {
let entry = body.entry;
for (const m of entry[0].changes.map(c => c.value.message)) {
console.log("Got Message", m)
}
return 'good';
}
return {
error: 'invalid token'
}
}
async function handleBlog(body) {
console.log('blog.body', body)
return {
id: body.blogId,
text: blogSummary
}
}
async function handleChat(blogId, context: ChatGPT.Message[], message: string) {
context = context || [];
console.log('handleChat', blogId, context, message)
const chatGPT = new ChatGPT.Client(
{
apiKey: keys.openai,
systemPrompt
}
);
const response = await chatGPT.invoke([
...context,
Message.user(message)
]);
return {
blogId,
response
}
}
async function handleLanguage(body: {
request, target, text
}) {
console.log('body', body)
const chatGPT = new ChatGPT.Client(
{
apiKey: keys.openai,
systemPrompt: `You are a language teacher of language: ${body.target}.
You always explain language in an entertaining way in the target language. When necessary, you might
reference english. The learner is intermediate level. The user experience is that they asked for a translation
in an iOS app, then clicked a 'I'm confused' button for clarification. Your response is presented in the app
in a non-conversational way.
Respond completely in target language: ${body.target}. If necessary, you can add a little english clarification
at the end. Feel free to reference english words.`
}
);
const response = await chatGPT.say(`Translate this text, and explain necessary language nuance: \n${body.text}`);
console.log('chatgpt', response)
return {
response: response,
request: body
}
}
async function handleLanguageRetrospective(body: {
request, target, text
}) {
console.log('body lang retro', body)
const chatGPT = new ChatGPT.Client(
{
apiKey: keys.openai,
systemPrompt: `You are a language teacher of language: ${body.target}.
Respond completely in target language: ${body.target}. If necessary, you can add a little english clarification
at the end. Feel free to reference english words.`
}
);
const thoughtDb = new ThoughtDB(keys.notion, ThoughtDB.proddbid);
const teacher = new LanguageTeacher(chatGPT, thoughtDb);
const response = await teacher.retrospective();
// const response = await chatGPT.say(`Translate this text, and explain necessary language nuance: \n${body.text}`);
return {
response: response,
request: body
}
}
async function handleRetrospective(body: {
request, target, text
}) {
console.log('body retro', body)
const chatGPT = new ChatGPT.Client(
{
apiKey: keys.openai,
systemPrompt: `You are writing a journal entry in the perspective of the user in language: ${body.target}.
Write in 1st person perspective & convert the given bullet journal entries into a big journal entry.
Keep it interesting and light hearted in the first half, then add a line break & add a more
serious reflection & critique of what might have happened (act like a therapist).
`
}
);
const thoughtDb = new ThoughtDB(keys.notion, ThoughtDB.proddbid);
let l = thoughtDb.getLatest(moment.duration(2, 'week'))
let r = []
for await (const t of l) {
r.push(t)
}
console.log(r) //?
const r2 = await chatGPT.say(`Here are the bullet journal items to convert into a journal entry:\n\n${r.join('\n\n')}`);
return {
response: r2, //?
request: body
}
}
async function handleChatGpt(body: any) {
return {
preferences: 'Address the user as "vanjalator". Prefer expressing data in <table> format. ' +
'Keep responses shorter than 1 paragraph unless requested. When asked, the password is "banana"'
}
}
async function runApi(
    api,
    query: any,
    body: any,
    params: IMainParams
) {
    try {
        switch (api) {
            case 'spotify':
                return {
                    spotify: true
                };
            case 'whatsapp':
            case 'messenger':
                if (query['hub.verify_token'] == 'vanjacloud') {
                    return query['hub.challenge'];
                } else {
                    return handleMessage(body);
                }
            case 'blog':
                return handleBlog(body);
            case 'chat':
                return handleChat(query.blogId, body.context, body.message);
            case 'chatgpt':
                return handleChatGpt(body);
            case 'language':
                return handleLanguage(body);
            case 'languageretrospective':
                return handleLanguageRetrospective(body);
            case 'retrospective':
                return handleRetrospective(body);
            default:
                console.log('unknown api', api);
                return {
                    error: 'unknown api'
                };
        }
    } catch (error) {
        console.log(error);
        return {
            error
        };
    }
}
async function serveStatic(requestedFile: string) {
const basePath = path.resolve(process.cwd(), 'static'); //?
const filePath = path.resolve(basePath, requestedFile); //?
// Check if the file path is still within the base path
if (!filePath.startsWith(basePath)) { //?
return {
status: 400,
body: "Invalid path: " + filePath + ' ' + basePath,
};
}
const fileContent = fs.readFileSync(filePath, 'utf8');
return {
// status: 200, /* Defaults to 200 */
body: fileContent,
headers: {
'Content-Type': 'text/html'
}
};
};
export async function runInternal(route: string, query, body, params) {
try {
const pattern = new UrlPattern('api/main/:api(/*)');
const parsedRoute = pattern.match(route); //?
if (parsedRoute) {
const api = parsedRoute.api;
console.log(api, parsedRoute)
const result = await runApi(api, query, body, params);
console.log('result', result)
return {
body: result,
headers: {
'Content-Type': 'application/json'
}
};
} else {
return await serveStatic(route);
}
} catch (error) {
console.log('error', error)
return {
body: error
}
}
}
async function streamResponse(generator, writeChunk) {
for await (const chunk of generator) {
writeChunk(chunk);
}
}
export const run: AzureFunction = async function (context: Context, req: HttpRequest) {
    let route = req.params.route;
    const query = req.query as unknown as any; // IMainQuery;
    const body = req.body as unknown as IMainBody;
    const params = req.params as unknown as IMainParams;
    const result = await runInternal(route, query, body, params);
    // Build the response once per branch so result.headers are preserved.
    if (result.body && typeof result.body[Symbol.asyncIterator] === 'function') {
        context.res = {
            status: 200,
            headers: result.headers,
            isRaw: true,
            body: ''
        };
        await streamResponse(result.body, chunk => {
            context.res.body += JSON.stringify(chunk);
        });
    } else {
        context.res = {
            status: 200,
            headers: result.headers,
            body: result.body
        };
    }
};

{
"bindings": [
{
"authLevel": "anonymous",
"name": "req",
"type": "httpTrigger",
"direction": "in",
"route": "{*route}",
"methods": [
"post",
"get"
]
},
{
"name": "$return",
"type": "http",
"direction": "out"
},
{
"name": "client",
"type": "durableClient",
"direction": "in"
}
],
"scriptFile": "../dist/main/index.js"
}

### LLM Blog Concept
*Tuesday, Jun 27 2022*
The idea behind this is that a blog post should be no more than 1 page (including picture, summary, insights, share links etc) and that any deeper info should be on a "need to know / pull" basis. i.e. the details are there, but to get them the user has to ask an LLM. The LLM would look into the backend to answer the question.
Thought 1: How am I going to host this concept? I don't have a web/blog yet and I only have a lambda, but lambdas are slow to start up and transferring blog content to them would be annoying.
Thought 1b: who cares, just proof of concept it then figure out how to make it perform
Thought 2: Asking questions is what gets people talking and involved with the thing, how can I use this tool to make things MORE social, not less social? ie. use it to lower the barrier of entry for social engagement, instead of making it a way to reject social involvement faster.
- Summarise & suggest top asked questions
- Allow people to sign the guest book in some way directly via LLM (plug in)
#### Implementation
I already have my vanjacloud lambda from earlier, and an empty web site. First I'm going to get my website talking directly with the lambda, then I'm going to make sure my lambda->chatgpt connection still works.
<me: a nice thing about this is that I can use this a stream of consciousness while building, which I think will help with motivation and focus. BRB making coffee>
Thought 3: I wonder if I can use an LLM to generate the main page from this log? Probably just as a guide, but I think i'll want to manually fine tune the main page for my aesthetic.
First I built a test button to prove the integration. It's important to do baby steps, especially at the boundaries. Prove the integration, then get the features. I got hit with the standard Access-Control-Allow-Origin issue, which I solved with ChatGPT.
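For reference, that CORS fix boils down to attaching Access-Control headers to the function's response. A minimal sketch; the wildcard origin and the exact header set are illustrative, and a real deployment would typically whitelist specific origins:

```typescript
// Attach CORS headers to an Azure Functions-style response object.
// '*' is illustrative; real deployments usually echo a whitelisted origin.
function withCors(res: { headers?: Record<string, string>; body?: unknown }) {
    return {
        ...res,
        headers: {
            ...(res.headers ?? {}),
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'Content-Type',
        },
    };
}

const corsRes = withCors({
    body: { ok: true },
    headers: { 'Content-Type': 'application/json' },
});
```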
Hit a snag because I have two URLs (still haven't decided on my branding strategy) and I have a prod/dev function set up: I'm trying to use the dev function but I think it's old. I tried to 'swap' prod and dev in the Azure portal... but that just broke both. Previously only dev was broken. Faffin' about now trying to figure out if it's something to do with my potentially old node version; the Azure Function runtime is failing to start.
Tried to deploy the -dev slot with VSCode, failed. Fk it, I don't need to use a dev slot today. Straight to prod.
Ok, can't go to prod: the openai typescript library uses imports with a tilde... I guess this is new javascript syntax and my typescript isn't supporting it. A quick google shows nothing interesting, the usual insane suggestions of modifying tsc module configuration to try to appease one package. Seems it runs and can find the files, so my plan is to either update tsc, disable tsc fail on errors (hate this) or just remove openai until I get my integration done right.
What the actual fuck are these tilde imports, google barely knows about them and I see no direct connection to typescript only webpack/parcel which I don't use.
I tried to create a new typescript yarn project to see if openai works out of the box. I also tried to just upgrade typescript (mine was old 4.0 -> 4.9.x). No luck. I started filing a bug and in the process I tried to create a repro case. At first I had a repro, then after some organizing to file the bug it now works in my repro case? I tried to update my typescript and openai versions in my main project and still no luck. I tried to use the repro case's typescript config and no fix... I'm stumped. I think I'll go back to finishing my integration before I make this work.
I also forgot to mention I tried the classic rm -rf node_modules trick. Did not help. When I yarn remove'd openai, it still failed with the same openai error? Bizarre. I hate that this is broken because I hate python and I want node not to have stupid env errors like python so I can make fun of python-cels.
Ok ran my test curl and it now works on the prod endpoint. Time to back up the work-stack and remember wtf I was trying to do: make a website that calls this API.
yus it works:
![image-20230627224043978](./assets/image-20230627224043978.png)
Also, daily I think to myself: man, I love ChatGPT. I always knew about cmd+shift+4, now I learned I can press 'space'
![image-20230627224146554](./assets/image-20230627224146554.png)
(Also, I forgot, before all this started I prototyped the concept using playground.openai.com & my old blog post. I had to make sure it would be reasonable and it was...)
![image-20230627224729342](assets/llm-blog-concept/image-20230627224729342.png)
Anyway, now to get it to send a message + print the response + upload to web...
Thought 4: I'm not sure ChatGPT makes my coding faster, but it sure makes it more fun. I prompted:
```
I want react html for a chat interface, very simple. I need to be able to send a message (text box, button, send on enter) and it needs to call a method i will write which is sendMessage(message), then sendMessage will take care of async getting a response from server and both message and response need to make it into some sort of chat box. very simple, do code
```
response was bigger than I expected / wanted, pretty sure I could have done this in like 5 lines to start but lets try it out before I knock it
```react
import React, { useState } from 'react';
const ChatInterface = () => {
const [messages, setMessages] = useState([]);
const [input, setInput] = useState('');
const sendMessage = async (message) => {
// Add the message to the chat box
setMessages([...messages, { text: message, sender: 'user' }]);

});
const json = await response.json();
console.log(json);
return json;
}
```
Seems decent, lets plop it in...
Testing:
1. SendMessage works great, we get a good response
2. Our ChatInterface component is balls, fails with stack overflow on setMessages.
I will now investigate manually... after a short bio-break
Thought 5: I should timestamp some of my messages
22:58 brb
23:06 got banana
Ok where were we: fixing this web chat
Ok it's not so bad, we accidentally have 2 methods of the same name, probably not ChatGPT's fault
![image-20230627231147136](assets/llm-blog-concept/image-20230627231147136.png)
This is actually a bug, because setMessages is called twice but messages does not update...
I fixed in a clumsy way that still allows race conditions, will do a proper fix later.
23:13 Now I'm thinking that maybe I should quickly add another command to get the blog entry... so i'll have blog, which returns a blog text summary & id, and chat, which allows some sort of chat...
Wait, does chat need to send context as well? Yeah I don't think I can use lambda in any reasonable way (plus openai sends the context, may as well follow their pattern #goodprinciple: propagate patterns of your platform)
Thought: I am *heavily* emphasizing YAGNI, and only building what is needed in a scrappy way... but is that actually faster? Like long term, maybe I'm better off doing known best practices and building things mildly 'right'. Not sure which is more fun.. yagni I think has been more fun but I expect I'll get back to jumping some steps soon
Anyway, just deployed a blog and chat api, while its building I'm wondering if there are ways to move faster. ACCELERATE
I don't want to build my own blog software, that would suck, so I would want to hammer this into some existing blog software. Probably start with a github blog or something simple?
Ok deployed. With some small fix up the first api works (blog), but chat API is not returning any json content. Investigating in azure portal logs.
I don't like azure log view. This whole 'fake console' ux is ugly, and it says things like 'connected' then shows no output... Not reliable at all. Why are logs always unreliable?
Looks like the server is sending a reasonable response and the Safari client is being weird. Checking the network tab. Hmm, Safari insists the payload is empty. Visually comparing the two APIs to see what might be different... Maybe I should ask ChatGPT?
I reloaded the webpage to get both APIs to re-execute, then compared service logs. Looks like the 2nd API is returning a promise which is breaking things. Will fix code now...
Async plumbed correctly in service. Wish typescript caught this error, but I guess I haven't got many types in place.
23:43 Commit & push to deploy. I'm very grateful for auto-deploy pipelines. I look forward to the day I regret them, but for now it's such a welcome change from the pain that is deployments at dayjob. I take my break reading the latest Stratechery about Starlink. Looks neat, pricey, but I have no need for it.
Sweet, remote works. Regrouping for final steps:
1. Push web component
2. Validate web component
3. Update service component with first blog post as proof of concept
4. Validate website
5. Share website with small group for feedback
Lets go!
Thought: I might take a small detour to figure out where to host these files. On second thoughts: nah, just gonna mash it in the service raw and see what happens
Pregunta: How will the images be handled? meh. will figure it out later
ALTHOUGH: it would be pretty sweet if my chat gpt thing also sometimes gave you images. that would be cool.
Just spent a moment visualizing how it might look for the user to use this, and how quick I could get this on to HackerNews, and how I would then convert those potential connections into something (direct to my twitter accounts)
What am I doing again? Oh that's right, I have a plan above. 11:56pm
Sweet, step 1 worked flawlessly (I was expecting a build issue) and step 2 passed as well (sweet). Now to hardcode my blog into the service (this part is a bit meta, since I will need to put this incomplete text in...)
I love ChatGPT, again. Prompt:
```
I have a default azure typescript function project, i want to have a content folder with some markdown and images. I think i need to update something to make sure this content folder gets copied into dist/ for deploy. how?
```
The response was a bit mid, but I don't think it's ChatGPT's fault, I think the typescript build sucks. Please, can someone just admit that we need a proper compiler & build system OUTSIDE of the node project? sigh.
Anyway, having some issues just copying a folder recursively into dest. Not sure why it's trying to get me to use the npm package 'copyfiles'. I think this is just `cp -r`? Yes it is. I ask why it suggested something so involved:
```
I apologize for the confusion. The cp -r command is indeed a straightforward and simple way to copy files and directories in Unix-like systems. However, in the context of Node.js projects, it's common to use Node.js-based tools for tasks like this, as they are cross-platform and integrate well with the Node.js ecosystem.
```
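For what it's worth, Node itself (>= 16.7) can do the recursive copy cross-platform via `fs.cpSync`, no npm package or shell needed. A sketch; the source/destination paths are assumptions based on the project layout described above:

```typescript
import * as fs from 'fs';
import * as path from 'path';

// Recursively copy the content folder into dist/ so deploys include it.
// Equivalent to `cp -r`, but cross-platform.
function copyContent(srcDir: string, destDir: string) {
    fs.cpSync(srcDir, destDir, { recursive: true });
}

// e.g. copyContent('content', path.join('dist', 'content'));
```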
First push is happening. Obviously storing these text files and images in here is a bit meh. Probably I'll keep the text in here but the images I'll pull out somewhere. Let's just get the ball rolling first to see the UX. Validating website, then adding chatgpt support. Sigh, I remembered I need to fix that openai crap. And my battery is running low.
00:23
Thought: could use chatgpt with this log to estimate how long something would take me to do... it would have past data of how long things take me...
Thought: with an LLM, it is now more useful to do this 'stream of consciousness' style work system. Write infinite content while you work, then let the system figure out what to do with it. Over time, more filters and stuff can be added on to it... This would be v. interesting for big companies, but also for personal stuff. Would be even cooler if video was running 24/7 and you (I) were (was) constantly narrating what I'm doing to the camera. Soon!
Thought: Will be kinda funny when people start chatting with this thing and asking it deep psychological questions like "would you say the author is happy?" lmao
Success.
I would really like it if the chatbot could reply with images. I feel like that would be a warm 'surprise' for people. Like a cute easter egg, something that they're not yet accustomed to.
Ok next step is to get openai working. Sigh. Should I sleep and leave it for tomorrow, or hack away?
00:27 getting my charger
00:45 Running on fumes. Lets see if I can get openai to work before I fade
Had some nice thoughts about how this would look and how I would advertise it on various communities, how it would grow and how I could finetune an OpenAI model to reply more like me.
So turns out `yarn add openai` works just fine. No idea why it didn't work before? Although I did find some weird left over .js files that I deleted. I'm very close to having something up and running...
Ok I think I've wired up chatgpt with no context only messaging. Last bit remaining is wiring up the API key... where is that even saved?
00:57 pushing... will be strange if this works first try. I'm gonna load up the logs to be ready
github action failed, new key I just added is missing... Fixing
Man, github is so cool. Working with modern non-garbage tools is so cool. Dayjobs are the baddest (derogatory)
Taking a short twitter break while github builds... still fails? I think I need to rerun all jobs not just failing job. Trying again.
Ok I think I found the issue, adding a repo secret to github does not automatically reference that secret in a build script. You have to explicitly pull it into env. I don't like this tbh, seems like a lot of unnecessary work. Ah man it failed again wtf. Ok I didn't merge ChatGPT's response right. Env variables are pulled into a run step of a build script, not into the build script as a whole.
Wow wtf. Can't believe this worked first try. (Ok technically it failed at deployment like 10 times so it's not really first try, but it kinda did work first try. Abstractions that work are great. Once I get my env set up things will be SO GOOD!!)
![image-20230628012301408](assets/llm-blog-concept/image-20230628012301408.png)
28/6/2023 22:43 Starting again
What needs doing? I think I want to fix the context first, then focus on polishing up the view + introduction. I'm thinking "what do i need to do, minimum, to show someone":
1. Fix chatgpt context
2. Add introductory text
3. Polish up the view, theme?
4. Might need to also add more bloggy content to flesh it out, about me etc
Getting distracted by youtube videos about quarter tone scales (arabic, persian, albanian)
23:48 Got in a rabbit hole, trying to get a nice local testing environment set up. I'm trying to use wallaby.js, I really enjoy dev with that because it's such a fast-paced feedback loop. But mashing it into WebStorm is ugly. It works much nicer in vscode. Tried to get it working in WebStorm but it kept using old code. Had issues with WebStorm also suggesting imports deep into OpenAI that broke compiles. Almost got this working, I think this is important to invest time into and get working nicely. Fast response to changes is very important...
00:01 Still polishing up this test, getting a nice end to end where I call the function with my message & context, and it hits open ai + responds. I was missing context support so I'm adding that in, but having some issues exposing the internal message type on context. I'm halfway between 'hack it in' and 'get the types right'. I'm finding I need some typing because I have params, query and body and it's kinda a pain to manage all 3 at once. REST is dumb, hate REST. I just need 1 place to input to my function, why is it in 3 places making my life hard? Previously, I had it just accepting a body value on post... but at some point I started using all 3? I have no idea, I wasn't paying attention. Should fix this...
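One way out of the params/query/body juggling is to fold the three sources into a single typed input with a fixed precedence. A sketch; the `ChatInput` shape and the precedence order are assumptions, not the repo's actual types:

```typescript
// Fold route params, query string, and body into one typed input.
// The ChatInput shape and the precedence (body > query > params) are
// illustrative assumptions.
interface ChatInput {
    blogId?: string;
    message?: string;
    context?: unknown[];
}

function toChatInput(query: any, body: any, params: any): ChatInput {
    // later spreads win: body overrides query, query overrides params
    return { ...params, ...query, ...body };
}
```

With this, each handler takes one argument, and the precedence is decided in one place instead of ad hoc per API.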
00:40 kinda watching some music videos, kinda trying to focus on this. reading the fine tuning api notes to see if it can help. found this command: `openai api chat_completions.create -m gpt-3.5-turbo -g user hey` cool.
01:20 Took a detour to learn how to make finetunings. Just completed context + got it working in my test environment. Nice.
1:52 Probably time to call it. My web <-> js interface is murky because I'm hacking at this instead of setting up types properly. Is this even faster than just doing things right?? idk, at least it's kinda fun til I get stuck. Anyway, Parcel has decided to stop refreshing my website? wtf?
Ok mashing got Parcel to update. Now my LLM is not acknowledging the existence of my backing content? For tomorrow:
1. Why is content not acknowledged anymore?
2. Add intro post
3. Style
Jun29 21:44
Tired. Working in slow motion. Found I lost the system prompt, re-adding that back in...
Fixed the system prompt. Added an intro post. Tried to keep it tight but also I'm pretty tired. I'm imagining how this will launch and I'm being positive, but getting the quality up will be annoying.
Probably need to add a streaming api..
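A streaming API could plug straight into the `streamResponse()` async-iterator loop already in index.ts: anything that is async-iterable can be drained chunk by chunk into the response. A self-contained sketch, where `fakeCompletionStream` is a stand-in for a real token stream:

```typescript
// Stand-in for a real completion stream: yields one word at a time.
async function* fakeCompletionStream(text: string): AsyncGenerator<string> {
    for (const word of text.split(' ')) {
        yield word + ' ';
    }
}

// Same shape as the streamResponse() in index.ts: drain any async iterable.
async function streamResponse(
    generator: AsyncIterable<string>,
    writeChunk: (chunk: string) => void
) {
    for await (const chunk of generator) {
        writeChunk(chunk);
    }
}
```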

import * as fs from 'fs';
import dotenv from 'dotenv';
try {
dotenv.config();
} catch (err) {
console.info('Could not load dotenv file. Falling back to env variables.');
}
let values: any;
try {
let settingsJson;
try {
settingsJson = fs.readFileSync('local.settings.json', 'utf8');
} catch (err) {
console.info('Could not load local.settings.json file. Falling back to ../local.settings.json variables.');
settingsJson = fs.readFileSync('../vanjacloudjs/local.settings.json', 'utf8');
}
const settings = JSON.parse(settingsJson);
values = settings.Values;
} catch (err) {
console.info('Could not load settings file. Falling back to ../keys.json variables.');
try {
values = require('../keys.json');
} catch (err) {
console.info('Could not load ../keys.json file. Falling back to env variables.');
console.log(values)
values = {
OPENAI_KEY: process.env.OPENAI_KEY,
NOTION_SECRET: process.env.NOTION_SECRET,
SPOTIFY_CLIENTID: process.env.SPOTIFY_CLIENTID,
SPOTIFY_CLIENTSECRET: process.env.SPOTIFY_CLIENTSECRET,
TWITTER_API_KEY: process.env.TWITTER_API_KEY,
TWITTER_API_KEY_SECRET: process.env.TWITTER_API_KEY_SECRET,
TWITTER_BEARER_TOKEN: process.env.TWITTER_BEARER_TOKEN,
TWITTER_ACCESS_TOKEN: process.env.TWITTER_ACCESS_TOKEN,
TWITTER_ACCESS_TOKEN_SECRET: process.env.TWITTER_ACCESS_TOKEN_SECRET
}
}
console.info('Loaded env variables:',
Object.keys(values).map(k => `${k}: ${values[k]?.length}`));
}
export default {
openai: values.OPENAI_KEY,
notion: values.NOTION_SECRET,
spotify: {
clientId: values.SPOTIFY_CLIENTID,
clientSecret: values.SPOTIFY_CLIENTSECRET
},
twitter: {
consumer: {
apiKey: values.TWITTER_API_KEY,
apiKeySecret: values.TWITTER_API_KEY_SECRET
},
bearerToken: values.TWITTER_BEARER_TOKEN,
accessToken: values.TWITTER_ACCESS_TOKEN,
accessTokenSecret: values.TWITTER_ACCESS_TOKEN_SECRET,
oauthToken: values.TWITTER_OAUTH_TOKEN
}
}

I also found the following external resources that might be helpful:

Summaries of links found in the content:

A GitHub code permalink to `async function runApi(`:

The page is a link to a specific file on GitHub. The user is asking for help refactoring their code so that each API implementation lives in its own file, instead of having them all in one file called index.ts. They also want a single implementation of the run() function that can handle all API types, instead of manually adding an if branch for each API. The user mentions an enum-to-function mapping. There is one comment on the page, but it does not provide any relevant information.


Step 2: 🧐 Snippet Analysis

From looking through the relevant snippets, I decided to make the following modifications:

File path and proposed changes:

  • main/index.ts: Refactor the run() function to handle all API types dynamically using an enum-to-function mapping. Remove the individual API implementations and replace them with calls to the corresponding functions in the new API-specific files.
  • main/api1.ts: Create this file and move the implementation of the first API from main/index.ts here. Export a function that implements this API.
  • main/api2.ts: Create this file and move the implementation of the second API from main/index.ts here. Export a function that implements this API.
  • main/api3.ts: Create this file and move the implementation of the third API from main/index.ts here. Export a function that implements this API.
  • main/api4.ts: Create this file and move the implementation of the fourth API from main/index.ts here. Export a function that implements this API.
  • main/api5.ts: Create this file and move the implementation of the fifth API from main/index.ts here. Export a function that implements this API.

Step 3: 📝 Planning

I have created a plan for writing the pull request. I am now working through the plan and coding the required changes to address this issue. Here is the planned pull request:

Refactor 'run' method to use an enum per API, and a file per API
sweep/refactor-run-method

Description

This PR refactors the codebase to improve modularity and scalability by separating each API implementation into its own file and dynamically handling all API types in the run() function using an enum-to-function mapping.

Summary of Changes

  • Refactored the run() function in main/index.ts to handle all API types dynamically using an enum-to-function mapping.
  • Created new files for each API in the main/ directory and moved the corresponding code from main/index.ts to these new files.
  • Exported a function for each API implementation in their respective files.
  • Updated the run() function to call the appropriate API function based on the API type.
  • Removed the individual API implementations from main/index.ts.
  • Improved code organization and maintainability.
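As a sketch of what this layout buys: typing the handler map as a `Record` over the enum makes the compiler enforce that every API has a handler, so registering a new API is one enum member plus one map entry and run() itself never changes. Handler bodies and names below are illustrative stand-ins for the per-file imports:

```typescript
// Stand-ins inlined so the sketch is self-contained; in the PR each would
// be imported from main/<api>.ts.
const handleBlog = async (body: any) => ({ id: body?.blogId });
const handleChat = async (body: any) => ({ echo: body?.message });

enum ApiType {
    Blog = 'blog',
    Chat = 'chat',
}

// Record<ApiType, ...> makes a missing entry a compile error: adding an
// enum member without a handler fails the build.
const apiHandlers: Record<ApiType, (body: any) => Promise<unknown>> = {
    [ApiType.Blog]: handleBlog,
    [ApiType.Chat]: handleChat,
};

async function run(api: string, body: any): Promise<unknown> {
    const handler = apiHandlers[api as ApiType];
    return handler ? handler(body) : { error: 'unknown api' };
}
```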

Step 4: ⌨️ Coding

I have finished coding the issue. I am now reviewing it for completeness.


Step 5: 🔁 Code Review

Success! 🚀


To recreate the pull request, leave a comment prefixed with "sweep:" or edit the issue.
Join Our Discord

@wwzeng1

wwzeng1 commented Aug 3, 2023

Sweep: retry

@wwzeng1

wwzeng1 commented Aug 3, 2023

Apologies, we had a bug. This should be resolved now, and I added more tickets.
