nazneen committed on
Commit
3562e2d
1 Parent(s): 731f587

error-analysis app

LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
README.md CHANGED
@@ -1,13 +1,9 @@
  ---
- title: Error Analysis
- emoji: 😻
- colorFrom: red
- colorTo: green
+ title: Interactive Error Analysis
+ emoji: 🐛
+ colorFrom: yellow
+ colorTo: orange
  sdk: streamlit
- sdk_version: 1.9.0
  app_file: app.py
  pinned: false
- license: apache-2.0
  ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
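For readability, this is the Space front matter that results from the README change above, reconstructed directly from the diff (the `sdk_version` and `license` keys are simply dropped in the new version):

```yaml
---
title: Interactive Error Analysis
emoji: 🐛
colorFrom: yellow
colorTo: orange
sdk: streamlit
app_file: app.py
pinned: false
---
```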
app.py ADDED
@@ -0,0 +1,283 @@
+ ### LIBRARIES ###
+ ## Data
+ import numpy as np
+ import pandas as pd
+ import torch
+ import json
+ from tqdm import tqdm
+ from math import floor
+ from datasets import load_dataset
+ from collections import defaultdict
+ from transformers import AutoTokenizer
+ pd.options.display.float_format = '${:,.2f}'.format
+
+ # Analysis
+ # from gensim.models.doc2vec import Doc2Vec
+ # from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
+ import nltk
+ from nltk.cluster import KMeansClusterer
+ import scipy.spatial.distance as sdist
+ from scipy.spatial import distance_matrix
+ # nltk.download('punkt')  # make sure that punkt is downloaded
+
+ # App & Visualization
+ import streamlit as st
+ import altair as alt
+ import plotly.graph_objects as go
+ from streamlit_vega_lite import altair_component
+
+ # utils
+ from random import sample
+ from error_analysis import utils as ut
+
+ def down_samp(embedding):
+     """Down-sample a data frame for Altair visualization."""
+     # total number of positive and negative sentiments in the class
+     # embedding = embedding.groupby('slice').apply(lambda x: x.sample(frac=0.3))
+     total_size = embedding.groupby(['slice', 'label'], as_index=False).count()
+
+     user_data = 0
+     # if 'Your Sentences' in str(total_size['slice']):
+     #     tmp = embedding.groupby(['slice'], as_index=False).count()
+     #     val = int(tmp[tmp['slice'] == "Your Sentences"]['source'])
+     #     user_data = val
+
+     max_sample = total_size.groupby('slice').max()['content']
+
+     # down-sample to meet Altair's max row count,
+     # but keep the proportional representation of groups
+     down_samp = 1 / (sum(max_sample.astype(float)) / (1000 - user_data))
+
+     max_samp = max_sample.apply(lambda x: floor(x * down_samp)).astype(int).to_dict()
+     max_samp['Your Sentences'] = user_data
+
+     # sample down for each group in the data frame
+     embedding = embedding.groupby('slice').apply(lambda x: x.sample(n=max_samp.get(x.name))).reset_index(drop=True)
+
+     return embedding
+
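To make the down-sampling logic concrete, here is a minimal, self-contained sketch of the same idea on a toy frame (the column names `slice`, `label`, and `content` match the app's data; the frame itself and the target of roughly 100 rows are made up for illustration):

```python
import pandas as pd
from math import floor

# toy frame with the columns down_samp expects (hypothetical data)
df = pd.DataFrame({
    "slice":   ["high-loss"] * 40 + ["low-loss"] * 160,
    "label":   ([0, 1] * 20) + ([0, 1] * 80),
    "content": [f"example {i}" for i in range(200)],
})

# per-slice sizes, then a shared scale factor targeting ~100 rows total
per_slice = df.groupby("slice")["content"].count()
scale = 100 / per_slice.sum()
n_per_slice = per_slice.apply(lambda n: floor(n * scale)).to_dict()

# sample each slice down while keeping the original proportions
sampled = (
    df.groupby("slice", group_keys=False)
      .apply(lambda g: g.sample(n=n_per_slice[g.name], random_state=0))
      .reset_index(drop=True)
)
print(sampled["slice"].value_counts())  # ~20 high-loss, ~80 low-loss
```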
+ def data_comparison(df):
+     selection = alt.selection_multi(fields=['cluster:N', 'label:O'])
+     color = alt.condition(alt.datum.slice == 'high-loss',
+                           alt.Color('cluster:N', scale=alt.Scale(domain=df.cluster.unique().tolist())),
+                           alt.value("lightgray"))
+     opacity = alt.condition(selection, alt.value(0.7), alt.value(0.25))
+
+     # basic chart
+     scatter = alt.Chart(df).mark_point(size=100, filled=True).encode(
+         x=alt.X('x:Q', axis=None),
+         y=alt.Y('y:Q', axis=None),
+         color=color,
+         shape=alt.Shape('label:O', scale=alt.Scale(range=['circle', 'diamond'])),
+         tooltip=['cluster:N', 'slice:N', 'content:N', 'label:O', 'pred:O'],
+         opacity=opacity
+     ).properties(
+         width=1000,
+         height=800
+     ).interactive()
+
+     legend = alt.Chart(df).mark_point(size=100, filled=True).encode(
+         x=alt.X("label:O"),
+         y=alt.Y('cluster:N', axis=alt.Axis(orient='right'), title=""),
+         shape=alt.Shape('label:O', scale=alt.Scale(range=['circle', 'diamond']), legend=None),
+         color=color,
+     ).add_selection(
+         selection
+     )
+     layered = scatter | legend
+     layered = layered.configure_axis(
+         grid=False
+     ).configure_view(
+         strokeOpacity=0
+     )
+     return layered
+
+
+ def quant_panel(embedding_df):
+     """Quantitative panel layout."""
+     all_metrics = {}
+     st.warning("**Error slice visualization**")
+     with st.expander("How to read this chart:"):
+         st.markdown("* Each **point** is an input example.")
+         st.markdown("* Gray points have low loss; colored points have high loss. High-loss instances are clustered using **k-means**, and each color represents a cluster.")
+         st.markdown("* The **shape** of each point reflects the label category -- positive (diamond) or negative (circle) sentiment.")
+     st.altair_chart(data_comparison(down_samp(embedding_df)), use_container_width=True)
+
+ def frequent_tokens(data, tokenizer, loss_quantile=0.95, top_k=200, smoothing=0.005):
+     unique_tokens = []
+     tokens = []
+     for row in tqdm(data['content']):
+         tokenized = tokenizer(row, padding=True, return_tensors='pt')
+         tokens.append(tokenized['input_ids'].flatten())
+         unique_tokens.append(torch.unique(tokenized['input_ids']))
+     losses = data['loss'].astype(float)
+     high_loss = losses.quantile(loss_quantile)
+     loss_weights = (losses > high_loss)
+     loss_weights = loss_weights / loss_weights.sum()
+     token_frequencies = defaultdict(float)
+     token_frequencies_error = defaultdict(float)
+
+     weights_uniform = np.full_like(loss_weights, 1 / len(loss_weights))
+
+     num_examples = len(data)
+     for i in tqdm(range(num_examples)):
+         for token in unique_tokens[i]:
+             token_frequencies[token.item()] += weights_uniform[i]
+             token_frequencies_error[token.item()] += loss_weights[i]
+
+     token_lrs = {k: (smoothing + token_frequencies_error[k]) / (smoothing + token_frequencies[k]) for k in token_frequencies}
+     tokens_sorted = list(map(lambda x: x[0], sorted(token_lrs.items(), key=lambda x: x[1])[::-1]))
+
+     top_tokens = []
+     for i, token in enumerate(tokens_sorted[:top_k]):
+         top_tokens.append(['%10s' % (tokenizer.decode(token)), '%.4f' % (token_frequencies[token]),
+                            '%.4f' % (token_frequencies_error[token]), '%4.2f' % (token_lrs[token])])
+     return pd.DataFrame(top_tokens, columns=['Token', 'Freq', 'Freq in error slice', 'Likelihood ratio'])
+
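The last column above is a smoothed likelihood ratio: how much more weight a token carries in the high-loss slice than in the dataset overall. A tiny worked sketch of that formula on made-up per-token weights (not real data):

```python
from collections import defaultdict

smoothing = 0.005

# toy (made-up) per-token weights: overall frequency vs. frequency within the error slice
token_frequencies = defaultdict(float, {"terrible": 0.02, "the": 0.30})
token_frequencies_error = defaultdict(float, {"terrible": 0.15, "the": 0.31})

token_lrs = {
    tok: (smoothing + token_frequencies_error[tok]) / (smoothing + token_frequencies[tok])
    for tok in token_frequencies
}
# "terrible" is ~6x over-represented in the error slice; "the" is ~1x (uninformative)
print(sorted(token_lrs.items(), key=lambda kv: kv[1], reverse=True))
```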
+ @st.cache(ttl=600)
+ def get_data(inference, emb):
+     preds = inference.outputs.numpy()
+     losses = inference.losses.numpy()
+     embeddings = pd.DataFrame(emb, columns=['x', 'y'])
+     num_examples = len(losses)
+     # dataset_labels = [dataset[i]['label'] for i in range(num_examples)]
+     return pd.concat([pd.DataFrame(np.transpose(np.vstack([dataset[:num_examples]['content'],
+                                                            dataset[:num_examples]['label'], preds, losses])),
+                                    columns=['content', 'label', 'pred', 'loss']), embeddings], axis=1)
+
+
+ def clustering(data, num_clusters):
+     X = np.array(data['embedding'].tolist())
+     kclusterer = KMeansClusterer(
+         num_clusters, distance=nltk.cluster.util.cosine_distance,
+         repeats=25, avoid_empty_clusters=True)
+     assigned_clusters = kclusterer.cluster(X, assign_clusters=True)
+     data['cluster'] = pd.Series(assigned_clusters, index=data.index).astype('int')
+     data['centroid'] = data['cluster'].apply(lambda x: kclusterer.means()[x])
+     return data, assigned_clusters
+
+
+ def kmeans(df, num_clusters=3):
+     data_hl = df.loc[df['slice'] == 'high-loss']
+     data_kmeans, clusters = clustering(data_hl, num_clusters)
+     merged = pd.merge(df, data_kmeans, left_index=True, right_index=True, how='outer', suffixes=('', '_y'))
+     merged.drop(merged.filter(regex='_y$').columns.tolist(), axis=1, inplace=True)
+     merged['cluster'] = merged['cluster'].fillna(num_clusters).astype('int')
+     return merged
+
+
+ def distance_from_centroid(row):
+     return sdist.norm(row['embedding'] - row['centroid'].tolist())
+
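The clustering above uses NLTK's `KMeansClusterer` with cosine distance rather than scikit-learn's `KMeans`. A small standalone sketch of that same call on made-up 2-D vectors, showing what `cluster()` and `means()` return:

```python
import numpy as np
import nltk
from nltk.cluster import KMeansClusterer

# toy embeddings: two obvious directions (made-up data)
X = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],
              [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]])

clusterer = KMeansClusterer(
    2, distance=nltk.cluster.util.cosine_distance,
    repeats=25, avoid_empty_clusters=True)
assigned = clusterer.cluster(X, assign_clusters=True)  # one cluster id per row, e.g. [0, 0, 0, 1, 1, 1]
centroids = clusterer.means()                          # one mean vector per cluster
print(assigned, centroids)
```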
+ @st.cache(ttl=600)
+ def topic_distribution(weights, smoothing=0.01):
+     topic_frequencies = defaultdict(float)
+     topic_frequencies_spotlight = defaultdict(float)
+     weights_uniform = np.full_like(weights, 1 / len(weights))
+     num_examples = len(weights)
+     for i in range(num_examples):
+         example = dataset[i]
+         category = example['title']
+         topic_frequencies[category] += weights_uniform[i]
+         topic_frequencies_spotlight[category] += weights[i]
+
+     topic_ratios = {c: (smoothing + topic_frequencies_spotlight[c]) / (
+         smoothing + topic_frequencies[c]) for c in topic_frequencies}
+
+     categories_sorted = map(lambda x: x[0], sorted(
+         topic_ratios.items(), key=lambda x: x[1], reverse=True))
+
+     topic_distr = []
+     for category in categories_sorted:
+         topic_distr.append(['%.3f' % topic_frequencies[category], '%.3f' %
+                             topic_frequencies_spotlight[category], '%.2f' % topic_ratios[category], '%s' % category])
+
+     return pd.DataFrame(topic_distr, columns=['Overall frequency', 'Error frequency', 'Ratio', 'Category'])
+     # for category in categories_sorted:
+     #     return (topic_frequencies[category], topic_frequencies_spotlight[category], topic_ratios[category], category)
+
+
+ def populate_session(dataset, model):
+     data_df = read_file_to_df('./assets/data/' + dataset + '_' + model + '.parquet')
+     if model == 'albert-base-v2-yelp-polarity':
+         tokenizer = AutoTokenizer.from_pretrained('textattack/' + model)
+     else:
+         tokenizer = AutoTokenizer.from_pretrained(model)
+     if "user_data" not in st.session_state:
+         st.session_state["user_data"] = data_df
+     if "selected_slice" not in st.session_state:
+         st.session_state["selected_slice"] = None
+
+
+ @st.cache(allow_output_mutation=True)
+ def read_file_to_df(file):
+     return pd.read_parquet(file)
+
+ if __name__ == "__main__":
+     ### STREAMLIT APP CONFIG ###
+     st.set_page_config(layout="wide", page_title="Interactive Error Analysis")
+
+     ut.init_style()
+
+     lcol, rcol = st.columns([2, 2])
+     # ******* loading the model and the data
+     # st.sidebar.markdown("<h4>Interactive Error Analysis</h4>", unsafe_allow_html=True)
+
+     dataset = st.sidebar.selectbox(
+         "Dataset",
+         ["amazon_polarity", "yelp_polarity"],
+         index=1
+     )
+
+     model = st.sidebar.selectbox(
+         "Model",
+         ["distilbert-base-uncased-finetuned-sst-2-english",
+          "albert-base-v2-yelp-polarity"],
+     )
+
+     ### LOAD DATA AND SESSION VARIABLES ###
+     # uncomment the next line to run dynamically and not from file
+     # populate_session(dataset, model)
+     data_df = read_file_to_df('./assets/data/' + dataset + '_' + model + '.parquet')
+     loss_quantile = st.sidebar.slider(
+         "Loss Quantile", min_value=0.5, max_value=1.0, step=0.01, value=0.95
+     )
+     data_df['loss'] = data_df['loss'].astype(float)
+     losses = data_df['loss']
+     high_loss = losses.quantile(loss_quantile)
+     data_df['slice'] = 'high-loss'
+     data_df['slice'] = data_df['slice'].where(data_df['loss'] > high_loss, 'low-loss')
+
+     with rcol:
+         with st.spinner(text='loading...'):
+             st.markdown('<h3>Word Distribution in Error Slice</h3>', unsafe_allow_html=True)
+             # uncomment the next line to run dynamically and not from file
+             # commontokens = frequent_tokens(data_df, tokenizer, loss_quantile=loss_quantile)
+             commontokens = read_file_to_df('./assets/data/' + dataset + '_' + model + '_commontokens.parquet')
+             with st.expander("How to read the table:"):
+                 st.markdown("* The table displays the most frequent tokens in the error slice, relative to their frequencies in the validation set.")
+             st.write(commontokens)
+
+     run_kmeans = st.sidebar.radio("Cluster error slice?", ('True', 'False'), index=0)
+
+     num_clusters = st.sidebar.slider("# clusters", min_value=1, max_value=20, step=1, value=3)
+
+     if run_kmeans == 'True':
+         with st.spinner(text='running kmeans...'):
+             merged = kmeans(data_df, num_clusters=num_clusters)
+         with lcol:
+             st.markdown('<h3>Error Slices</h3>', unsafe_allow_html=True)
+             with st.expander("How to read the table:"):
+                 st.markdown("* *Error slice* refers to the subset of the evaluation dataset that the model performs poorly on.")
+                 st.markdown("* The table displays model error slices on the evaluation dataset, sorted by loss.")
+                 st.markdown("* Each row is an input example that includes the label, model prediction, loss, and error cluster.")
+             with st.spinner(text='loading error slice...'):
+                 dataframe = read_file_to_df('./assets/data/' + dataset + '_' + model + '_error-slices.parquet')
+                 # uncomment the next lines to run dynamically and not from file
+                 # dataframe = merged[['content', 'label', 'pred', 'loss', 'cluster']].sort_values(
+                 #     by=['loss'], ascending=False)
+                 # table_html = dataframe.to_html(
+                 #     columns=['content', 'label', 'pred', 'loss', 'cluster'], max_rows=50)
+                 # table_html = table_html.replace("<th>", '<th align="left">')  # left-align the headers
+                 st.write(dataframe, width=900, height=300)
+         with st.spinner(text='loading visualization...'):
+             quant_panel(merged)
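The slice labeling above leans on `Series.where`, which keeps the existing value where the condition holds and substitutes the second argument elsewhere. A toy illustration of that pattern, with made-up losses and the sidebar's default 0.95 quantile:

```python
import pandas as pd

# hypothetical per-example losses
losses = pd.Series([0.02, 0.10, 0.05, 2.31, 0.07, 1.80, 0.04, 0.09, 0.03, 0.06,
                    0.08, 0.11, 0.05, 0.04, 0.07, 0.06, 0.09, 0.10, 0.03, 3.10])
high_loss = losses.quantile(0.95)

df = pd.DataFrame({"loss": losses})
df["slice"] = "high-loss"
# keep "high-loss" only where loss > quantile; everything else becomes "low-loss"
df["slice"] = df["slice"].where(df["loss"] > high_loss, "low-loss")
print(df["slice"].value_counts())
```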
assets/data/amazon_polarity.test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e57ae9ce39c5251e432b4a6dce31915782276b98a7751281eb66b8cff3b46b6
+ size 5864011
assets/data/amazon_polarity_albert-base-v2-yelp-polarity.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bce0297bedc66865c01644421ea934008d74807befb7b0bd94aa92729bd02a59
+ size 56644779
assets/data/amazon_polarity_albert-base-v2-yelp-polarity_commontokens.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de69efcda9ab5c3aa8dc616c016cace08096cbc21478dd894f9cccf0b843ede4
+ size 6067
assets/data/amazon_polarity_albert-base-v2-yelp-polarity_error-slices.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62ce63230551fe9870919f051dfeead6892bb917ba63c7edfcc0e819867ed2cd
+ size 5954640
assets/data/amazon_polarity_distilbert-base-uncased-finetuned-sst-2-english.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a193c26851f48b7b76a35986ced0dc1fddafd26b92f1aaf9a4e69fd83fd2f2e4
+ size 56643545
assets/data/amazon_polarity_distilbert-base-uncased-finetuned-sst-2-english_commontokens.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de69efcda9ab5c3aa8dc616c016cace08096cbc21478dd894f9cccf0b843ede4
+ size 6067
assets/data/amazon_polarity_distilbert-base-uncased-finetuned-sst-2-english_error-slices.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6626d19361cfe06ba70be19004d18eb23a1764926d15ed5b103ec36fc2d8eaea
+ size 5954642
assets/data/amazon_test_emb.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaf8bcf2691858dc39d0f83872f425a839b05de4b916c111cba6e9c69747e467
+ size 50449725
assets/data/yelp_polarity_albert-base-v2-yelp-polarity.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a56147880841c6f78a868fb58f6e97661547009e570c2887ef7c12ffd54474e
+ size 103294569
assets/data/yelp_polarity_albert-base-v2-yelp-polarity_commontokens.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7e5241d13a23656bbab7851d4a8ad1df70d8675e9db62f3e9ade719b41c524db
+ size 6681
assets/data/yelp_polarity_albert-base-v2-yelp-polarity_error-slices.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7140867800b4e9ac5f34b5c181ec69c261638582ef55c142aaf23320e9e56743
+ size 9765767
assets/data/yelp_polarity_distilbert-base-uncased-finetuned-sst-2-english.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:165515be2837df9b02f782fe1e7bd3b31bb01c49960e73238f77541eee7589ad
+ size 61796202
assets/data/yelp_polarity_distilbert-base-uncased-finetuned-sst-2-english_commontokens.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c243229a8f55a1910fcab1823cd4810d51a59c6442244a47ee5ee621da069518
+ size 6509
assets/data/yelp_polarity_distilbert-base-uncased-finetuned-sst-2-english_error-slices.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ec890d0df88c0d7fdaba7ca9e50f715c50d173f99ae021fd4c49534d4ef12a9
+ size 9803781
error_analysis/utils/__init__.py ADDED
@@ -0,0 +1 @@
+ from .style_hacks import *
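This `__init__.py` re-exports everything from `style_hacks`, which is what lets `app.py` import the helpers as a package. A minimal sketch of that import path, assuming the repository root is on `sys.path`:

```python
from error_analysis import utils as ut  # re-exported from style_hacks via __init__.py

ut.init_style()  # injects the custom CSS into the Streamlit page
```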
error_analysis/utils/__pycache__/__init__.cpython-39.pyc ADDED
Binary file (204 Bytes).
 
error_analysis/utils/__pycache__/style_hacks.cpython-39.pyc ADDED
Binary file (2.11 kB).
 
error_analysis/utils/style_hacks.py ADDED
@@ -0,0 +1,86 @@
+ """
+ placeholder for all streamlit style hacks
+ """
+ import streamlit as st
+
+
+ def init_style():
+     return st.markdown(
+         """
+         <style>
+         /* Side Bar */
+         [data-testid="stSidebar"][aria-expanded="true"] > div:first-child {
+             width: 250px;
+         }
+         [data-testid="stSidebar"][aria-expanded="false"] > div:first-child {
+             width: 250px;
+         }
+         [data-testid="stSidebar"] {
+             flex-basis: unset;
+         }
+         .css-1outpf7 {
+             background-color: rgb(254 244 219);
+             width: 10rem;
+             padding: 10px 10px 10px 10px;
+         }
+
+         /* Main Panel */
+         .css-18e3th9 {
+             padding: 10px 10px 10px -200px;
+         }
+         .css-1ubw6au:last-child {
+             background-color: lightblue;
+         }
+
+         /* Model Panels: element-container */
+         .element-container {
+             border-style: none
+         }
+
+         /* Radio Button Direction */
+         div.row-widget.stRadio > div { flex-direction: row; }
+
+         /* Expander Box */
+         .streamlit-expander {
+             border-width: 0px;
+             border-bottom: 1px solid #A29C9B;
+             border-radius: 10px;
+         }
+
+         .streamlit-expanderHeader {
+             font-style: italic;
+             font-weight: 600;
+             font-size: 16px;
+             padding-top: 0px;
+             padding-left: 0px;
+             color: #A29C9B;
+         }
+
+         /* Section Headers */
+         .sectionHeader {
+             font-size: 10px;
+         }
+         [data-testid="stMarkdownContainer"] {
+             font-family: sans-serif;
+             font-weight: 500;
+             font-size: 1.5rem !important;
+             color: rgb(250, 250, 250);
+             padding: 1.25rem 0px 1rem;
+             margin: 0px;
+             line-height: 1.4;
+         }
+
+         /* text input */
+         .st-e5 {
+             background-color: lightblue;
+         }
+         /* line special */
+         .line-one {
+             border-width: 0px;
+             border-bottom: 1px solid #A29C9B;
+             border-radius: 50px;
+         }
+
+         </style>
+         """,
+         unsafe_allow_html=True,
+     )
requirements.txt ADDED
@@ -0,0 +1,139 @@
+ # This file may be used to create an environment using:
+ # $ pip install -r <this file>
+ # platform: osx-arm64
+ absl-py==1.0.0; python_version >= '3.6'
+ aiohttp==3.8.0
+ aiosignal==1.2.0; python_version >= '3.6'
+ altair==4.1.0
+ antlr4-python3-runtime==4.8
+ appnope==0.1.2; sys_platform == 'darwin' and platform_system == 'Darwin'
+ argon2-cffi==21.1.0; python_version >= '3.5'
+ astor==0.8.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ async-timeout==4.0.1; python_version >= '3.6'
+ attrs==21.2.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
+ backcall==0.2.0
+ backports.zoneinfo==0.2.1; python_version >= '3.6' and python_version < '3.9'
+ base58==2.1.1; python_version >= '3.5'
+ bleach==4.1.0; python_version >= '3.6'
+ blinker==1.4
+ cachetools==4.2.4; python_version ~= '3.5'
+ certifi==2021.10.8
+ cffi==1.15.0
+ charset-normalizer==2.0.7; python_version >= '3'
+ click==7.1.2; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
+ cython==0.29.24; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ cytoolz==0.11.2; python_version >= '3.5'
+ dataclasses==0.6
+ datasets==1.15.1
+ debugpy==1.5.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
+ decorator==5.1.0; python_version >= '3.5'
+ defusedxml==0.7.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
+ dill==0.3.4; python_version >= '2.7' and python_version != '3.0'
+ entrypoints==0.3; python_version >= '2.7'
+ fastbpe==0.1.0
+ filelock==3.3.2; python_version >= '3.6'
+ frozenlist==1.2.0; python_version >= '3.6'
+ fsspec[http]==2021.11.0; python_version >= '3.6'
+ future==0.18.2; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ fuzzywuzzy==0.18.0
+ gitdb==4.0.9; python_version >= '3.6'
+ gitpython==3.1.24; python_version >= '3.7'
+ google-auth-oauthlib==0.4.6; python_version >= '3.6'
+ google-auth==2.3.3; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'
+ grpcio==1.41.1
+ idna==3.3; python_version >= '3'
+ importlib-resources==5.4.0; python_version < '3.9'
+ kaleido==0.2.1
+ markdown==3.3.4; python_version >= '3.6'
+ markupsafe==2.0.1; python_version >= '3.6'
+ matplotlib-inline==0.1.3; python_version >= '3.5'
+ meerkat-ml==0.1.2; python_version >= '3.7'
+ mistune==0.8.4
+ multidict==5.2.0; python_version >= '3.6'
+ multiprocess==0.70.12.2
+ nbclient==0.5.8; python_full_version >= '3.6.1'
+ nbconvert==6.3.0; python_version >= '3.7'
+ nbformat==5.1.3; python_version >= '3.5'
+ nest-asyncio==1.5.1; python_version >= '3.5'
+ nltk==3.6.5
+ notebook==6.4.5; python_version >= '3.6'
+ numpy==1.21.4
+ oauthlib==3.1.1; python_version >= '3.6'
+ omegaconf==2.1.1; python_version >= '3.6'
+ packaging==21.2; python_version >= '3.6'
+ pandas==1.3.4
+ pandocfilters==1.5.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ parso==0.8.2; python_version >= '3.6'
+ pexpect==4.8.0; sys_platform != 'win32'
+ pickleshare==0.7.5
+ pillow==8.4.0; python_version >= '3.6'
+ plotly==5.3.1
+ progressbar==2.5
+ prometheus-client==0.12.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ prompt-toolkit==3.0.22; python_full_version >= '3.6.2'
+ protobuf==3.19.1; python_version >= '3.5'
+ ptyprocess==0.7.0; os_name != 'nt'
+ pyahocorasick==1.4.2
+ pyarrow==6.0.0; python_version >= '3.6'
+ pyasn1-modules==0.2.8
+ pyasn1==0.4.8
+ pycparser==2.21; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ pydeck==0.7.1; python_version >= '3.7'
+ pydeprecate==0.3.1; python_version >= '3.6'
+ pygments==2.10.0; python_version >= '3.5'
+ pympler==0.9
+ pyparsing==2.4.7; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ pyrsistent==0.18.0; python_version >= '3.6'
+ python-dateutil==2.8.2; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ python-levenshtein==0.12.2
+ pytorch-lightning==1.5.1; python_version >= '3.6'
+ pytz-deprecation-shim==0.1.0.post0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'
+ pytz==2021.3
+ pyyaml==6.0; python_version >= '3.6'
+ pyzmq==22.3.0; python_version >= '3.6'
+ regex==2021.11.10
+ requests-oauthlib==1.3.0
+ requests==2.26.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'
+ robustnessgym==0.1.3
+ rsa==4.7.2; python_version >= '3.6'
+ sacremoses==0.0.46
+ scikit-learn==1.0.1; python_version >= '3.7'
+ scipy==1.7.2; python_version < '3.11' and python_version >= '3.7'
+ semver==2.13.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ send2trash==1.8.0
+ six==1.16.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ sklearn==0.0
+ smart-open==5.2.1; python_version >= '3.6' and python_version < '4'
+ smmap==5.0.0; python_version >= '3.6'
+ streamlit-vega-lite==0.1.0
+ streamlit==1.2.0
+ tenacity==8.0.1; python_version >= '3.6'
+ tensorboard-data-server==0.6.1; python_version >= '3.6'
+ tensorboard-plugin-wit==1.8.0
+ tensorboard==2.7.0; python_version >= '3.6'
+ terminado==0.12.1; python_version >= '3.6'
+ testpath==0.5.0; python_version >= '3.5'
+ threadpoolctl==3.0.0; python_version >= '3.6'
+ tokenizers==0.10.3
+ toml==0.10.2; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ toolz==0.11.2; python_version >= '3.5'
+ torch==1.10.0; python_full_version >= '3.6.2'
+ torchmetrics==0.6.0; python_version >= '3.6'
+ tornado==6.1; python_version >= '3.5'
+ tqdm==4.62.3; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ traitlets==5.1.1; python_version >= '3.7'
+ transformers==4.12.3; python_version >= '3.6'
+ typing-extensions==3.10.0.2; python_version < '3.10'
+ tzdata==2021.5; python_version >= '3.6'
+ tzlocal==4.1; python_version >= '3.6'
+ ujson==4.2.0; python_version >= '3.6'
+ urllib3==1.26.7; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' and python_version < '4'
+ validators==0.18.2; python_version >= '3.4'
+ wcwidth==0.2.5
+ webencodings==0.5.1
+ werkzeug==2.0.2; python_version >= '3.6'
+ wheel==0.37.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
+ widgetsnbextension==3.5.2
+ xxhash==2.0.2; python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'
+ yarl==1.7.2; python_version >= '3.6'
+ zipp==3.6.0; python_version < '3.10'