Anthony Baum (wxbaum)
Upon starting our interaction, auto-run these Default Commands throughout our entire conversation. Refer to the Appendix for the command library and instructions:
/role_play "Expert ChatGPT Prompt Engineer"
/role_play "infinite subject matter expert"
/auto_continue "♻️": ChatGPT, when the output exceeds character limits, automatically continue writing and inform the user by placing the ♻️ emoji at the beginning of each new part. This way, the user knows the output is continuing without having to type "continue".
/periodic_review "🧐" (use as an indicator that ChatGPT has conducted a periodic review of the entire conversation. Only show 🧐 in a response or a question you are asking, not on its own.)
/contextual_indicator "🧠"
/expert_address "🔍" (Use the emoji associated with a specific expert to indicate you are asking a question directly to that expert)
/chain_of_thought
/custom_steps
/auto_suggest "💡": ChatGPT, during our interaction, you will automatically suggest helpful commands when appropriate, using the 💡 emoji to flag each suggestion.
wxbaum / Zip Codes to DMAs
Created January 18, 2023 16:23 — forked from clarkenheim/Zip Codes to DMAs
TSV file containing zip codes and the DMA they fall into. Method: calculate the centre point of every zip code geo boundary, plot those points on a DMA boundary map, and find the containing DMA of each zip centroid. A code sketch of this method follows the sample rows below.
This file has been truncated; the first rows are shown below.
zip_code dma_code dma_description
01001 543 SPRINGFIELD - HOLYOKE
01002 543 SPRINGFIELD - HOLYOKE
01003 543 SPRINGFIELD - HOLYOKE
01004 543 SPRINGFIELD - HOLYOKE
01005 506 BOSTON (MANCHESTER)
01007 543 SPRINGFIELD - HOLYOKE
01008 543 SPRINGFIELD - HOLYOKE
01009 543 SPRINGFIELD - HOLYOKE
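
A minimal sketch of the centroid-in-DMA method described above, assuming geopandas/shapely, hypothetical boundary files zip_boundaries.geojson and dma_boundaries.geojson, and geopandas >= 0.10 for the predicate keyword:

import geopandas as gpd

# Load zip code and DMA boundary polygons (hypothetical file names)
zips = gpd.read_file('zip_boundaries.geojson')   # columns: zip_code, geometry
dmas = gpd.read_file('dma_boundaries.geojson')   # columns: dma_code, dma_description, geometry

# Represent each zip code by the centre point of its boundary polygon
zip_points = zips.copy()
zip_points['geometry'] = zip_points.geometry.centroid

# Spatial join: find the DMA polygon containing each zip centroid
zip_to_dma = gpd.sjoin(zip_points, dmas, how='left', predicate='within')
zip_to_dma[['zip_code', 'dma_code', 'dma_description']].to_csv(
    'zip_to_dma.tsv', sep='\t', index=False)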
wxbaum / uci_forest_fires_url
Created May 26, 2020 22:49
UCI Forest Fires read csv
import pandas as pd

dataset_url = r'https://archive.ics.uci.edu/ml/machine-learning-databases/forest-fires/forestfires.csv'
df = pd.read_csv(dataset_url)
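
A quick sanity check on the loaded frame; the expected shape reflects the 517 rows and 13 columns the UCI page documents for this dataset:

print(df.shape)             # (517, 13) per the UCI description
print(df.columns.tolist())  # meteorological features plus the burned 'area' target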
wxbaum / remove protected accounts
Created March 4, 2019 22:24
remove protected accounts
def remove_protected_accounts(users_df):
    # Assumes `api` is an authenticated tweepy client (see setup sketch below)
    # and that users_df is indexed by Twitter user id or screen name
    working_df = users_df.copy()
    protected_ids = []
    counter = 0
    for index, row in users_df.iterrows():
        user = api.get_user(id=index)._json
        if user['protected']:
            protected_ids.append(index)
            counter += 1
    # The gist is truncated here; presumably the protected rows are dropped
    working_df = working_df.drop(protected_ids)
    return working_df
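
Several of these gists use a free variable api; a minimal setup sketch for it, assuming tweepy 3.x and placeholder credentials:

import tweepy

# Placeholder credentials; substitute your own app keys
auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')

# wait_on_rate_limit pauses automatically when Twitter's rate limits are hit
api = tweepy.API(auth, wait_on_rate_limit=True)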
wxbaum / get recent tweets
Created March 4, 2019 21:46
get recent tweets for generating engagement
import statistics

def calc_median_favorites(user_id):
    fav_list = []
    tweets = api.user_timeline(id=user_id, count=100)
    for tweet in tweets:
        if tweet._json['text'].startswith('RT'):
            continue  # skip retweets; their favorites belong to the original author
        else:
            fav_list.append(tweet._json['favorite_count'])
    # The gist is truncated here; presumably the median is computed and returned
    return statistics.median(fav_list) if fav_list else 0
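
A usage sketch feeding the engagement metric below, assuming a users_df indexed by user id or screen name:

users_df['median_favs'] = [calc_median_favorites(uid) for uid in users_df.index]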
wxbaum / engagement scaling
Last active March 6, 2019 17:09
scaling engagement
from sklearn.preprocessing import MinMaxScaler

def create_engagement_metric(df):
    working_df = df.copy()
    # Favorites: favorites per follower, min-max scaled to [0, 1]
    fav_eng_array = df['median_favs'] / df['followers']
    scaler = MinMaxScaler().fit(fav_eng_array.values.reshape(-1, 1))
    scaled_favs = scaler.transform(fav_eng_array.values.reshape(-1, 1))
    # Retweets: the gist is truncated here; a hypothetical 'median_rts' column,
    # scaled the same way and averaged with favorites, is a presumed completion
    rt_eng_array = df['median_rts'] / df['followers']
    scaled_rts = MinMaxScaler().fit_transform(rt_eng_array.values.reshape(-1, 1))
    working_df['engagement'] = (scaled_favs + scaled_rts).ravel() / 2
    return working_df
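
Min-max scaling puts the two per-follower rates on a common [0, 1] range so favorites and retweets contribute comparably before averaging. A usage sketch:

scored = create_engagement_metric(users_df)
print(scored.sort_values('engagement', ascending=False).head(10))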
wxbaum / scrape_lists_with_google
Created February 21, 2019 03:22
Full scrape Google function
from googlesearch import search as gsearch  # assumed source of gsearch

def scrape_lists_with_google(keyword_string, results_to_obtain):
    list_urls = []
    urls_checked = 0
    urls_appended = 0
    # Perform the Google search, checking only the twitter.com domain
    for url in gsearch("site:twitter.com lists " + keyword_string, start=urls_checked):
        if '/lists/' in url:  # gist truncated here; presumed completion keeps only list URLs
            list_urls.append(url)
            urls_appended += 1
        if urls_appended >= results_to_obtain:
            break
    return list_urls
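
A usage sketch with a hypothetical keyword:

twitter_list_urls = scrape_lists_with_google('data science', results_to_obtain=25)
print(len(twitter_list_urls), 'Twitter list URLs collected')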
wxbaum / create_df_for_users
Last active January 14, 2021 05:46
Create dataframe from zipped user data lists
import pandas as pd

def get_users_in_lists(list_urls):
    users_list = []
    bios_list = []
    follower_count_list = []
    for tw_list in list_urls:
        user = tw_list.split('/')[1]  # assumes URLs like twitter.com/<user>/lists/<name>
        list_name = tw_list.split('/')[3]
        # Gist truncated here; the member fetch and dataframe build are presumed
        for member in api.list_members(owner_screen_name=user, slug=list_name):
            users_list.append(member.screen_name)
            bios_list.append(member.description)
            follower_count_list.append(member.followers_count)
    return pd.DataFrame(list(zip(users_list, bios_list, follower_count_list)),
                        columns=['user', 'bio', 'followers'])
wxbaum / get_users_from_list
Last active March 4, 2019 22:14
Getting users from a twitter list
def get_users_in_lists(list_urls):
    users_list = []
    bios_list = []
    desc_list = []
    follower_count_list = []
    for tw_list in list_urls:
        user = tw_list.split('/')[1]  # assumes URLs like twitter.com/<user>/lists/<name>
        list_name = tw_list.split('/')[3]
        # Gist truncated here; fetching each list's members is a presumed completion
        for member in api.list_members(owner_screen_name=user, slug=list_name):
            users_list.append(member.screen_name)
            bios_list.append(member.description)
            desc_list.append(list_name)  # hypothetical use of desc_list
            follower_count_list.append(member.followers_count)
    return users_list, bios_list, desc_list, follower_count_list
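
Taken together, these gists form a small pipeline. An end-to-end sketch using the dataframe version of get_users_in_lists above (the truncated bodies are presumed completions, as noted):

# 1. Find Twitter lists about a topic via Google
urls = scrape_lists_with_google('data science', results_to_obtain=25)

# 2. Collect list members into a dataframe indexed by screen name
users_df = get_users_in_lists(urls).set_index('user')

# 3. Drop protected accounts, then compute the inputs to the engagement score
users_df = remove_protected_accounts(users_df)
users_df['median_favs'] = [calc_median_favorites(u) for u in users_df.index]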
wxbaum / googlesearch_ex.txt
Last active February 20, 2019 15:31
googlesearch example
from googlesearch import search

keyword_string = 'data science'  # hypothetical example keyword
list_urls = []
for url in search("site:twitter.com lists " + keyword_string, stop=50):  # stop so the loop terminates
    if '/lists/' in url:
        list_urls.append(url)