import streamlit as st
from st_pages import Page, show_pages

st.set_page_config(page_title="Information Retrieval", page_icon="🏠")

show_pages(
    [
        Page("app.py", "Home", "🏠"),
        Page(
            "Information_Retrieval.py", "Information Retrieval", "📝"
        ),
    ]
)

st.title("Project in Text Mining and Application")
st.header("Information Retrieval using a pre-trained model - ELECTRA")
st.markdown(
    """
    **Team members:**
    | Student ID | Full Name                | Email                          |
    | ---------- | ------------------------ | ------------------------------ |
    | 1712603    | Lê Quang Nam             | [email protected]   |
    | 19120582   | Lê Nhựt Minh             | [email protected]  |
    | 19120600   | Bùi Nguyên Nghĩa         | [email protected]  |
    | 21120198   | Nguyễn Thị Lan Anh       | [email protected]  |
    """
)

st.header("The Need for Information Retrieval")
st.markdown(
    """
    The task of deciding whether a question and a context paragraph are related 
    to each other consists of two main steps: word embedding and classification. 
    Together, these steps make up the process of analyzing and evaluating the 
    relationship between the question and the context.
    """
)

st.header("Technology used")
st.markdown(
    """
    The ELECTRA model, specifically the "google/electra-small-discriminator" 
    checkpoint used here, is a deep learning model for natural language processing (NLP) 
    developed by Google. It is an efficient variant of pre-trained language models 
    based on the Transformer architecture, designed to understand and process 
    natural language effectively.
    For this text classification task, we use two related classes, ElectraTokenizer 
    and ElectraForSequenceClassification.
    """
)
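As a minimal sketch of the approach described above (assuming the `transformers` and `torch` packages are installed, and omitting the fine-tuning that would adapt the discriminator to this task), the two classes could be wired together like this; the function names `relevance_probability` and `score_pair` are illustrative, not part of the app:

```python
import math


def relevance_probability(logits) -> float:
    """Numerically stable softmax over two logits; returns the probability
    of the "related" class (index 1)."""
    neg, pos = float(logits[0]), float(logits[1])
    m = max(neg, pos)
    e_neg, e_pos = math.exp(neg - m), math.exp(pos - m)
    return e_pos / (e_neg + e_pos)


def score_pair(question: str, context: str) -> float:
    """Encode the pair as [CLS] question [SEP] context [SEP] and classify it.

    transformers/torch are imported lazily so the helper above stays usable
    without them installed.
    """
    import torch
    from transformers import ElectraForSequenceClassification, ElectraTokenizer

    tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
    model = ElectraForSequenceClassification.from_pretrained(
        "google/electra-small-discriminator", num_labels=2  # related / not related
    )
    model.eval()
    # Tokenizing the two texts together produces a single paired sequence.
    inputs = tokenizer(question, context, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    return relevance_probability(logits[0])
```

Note that without task-specific fine-tuning the sequence-classification head is randomly initialized, so the scores only become meaningful after training on labeled (question, context) pairs.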