abhik1505040 committed
Commit bf70c74
Parent: 7d5b0ab

Added initial files

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +190 -0
  3. data/squad_bn.tar.bz2 +3 -0
  4. squad_bn.py +120 -0
.gitattributes CHANGED
@@ -36,3 +36,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.mp3 filter=lfs diff=lfs merge=lfs -text
  *.ogg filter=lfs diff=lfs merge=lfs -text
  *.wav filter=lfs diff=lfs merge=lfs -text
+ data/squad_bn.tar.bz2 filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,190 @@
---
annotations_creators:
- machine-generated
language_creators:
- found
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended
task_categories:
- text-classification
task_ids:
- natural-language-inference
languages:
- bn
licenses:
- cc-by-nc-sa-4.0
---

# Dataset Card for `xnli_bn`

## Table of Contents
- [Dataset Card for `xnli_bn`](#dataset-card-for-xnli_bn)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
    - [Usage](#usage)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [https://github.com/csebuetnlp/banglabert](https://github.com/csebuetnlp/banglabert)
- **Paper:** [**"BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding"**](https://arxiv.org/abs/2101.00204)
- **Point of Contact:** [Tahmid Hasan](mailto:[email protected])

### Dataset Summary

This is a Natural Language Inference (NLI) dataset for Bengali, curated from the subset of MNLI data used in XNLI using the state-of-the-art English-to-Bengali translation model introduced **[here](https://aclanthology.org/2020.emnlp-main.207/)**.

### Supported Tasks and Leaderboards

[More information needed](https://github.com/csebuetnlp/banglabert)

### Languages

* `Bengali`

### Usage
```python
from datasets import load_dataset
dataset = load_dataset("csebuetnlp/xnli_bn")
```
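
`load_dataset` returns a `DatasetDict` keyed by split name (see [Data Splits](#data-splits) below); for example:

```python
from datasets import load_dataset

dataset = load_dataset("csebuetnlp/xnli_bn")
print(dataset)              # summarizes the train/validation/test splits and row counts
print(dataset["train"][0])  # first training example as a dict
```
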
## Dataset Structure

### Data Instances

One example from the dataset is given below in JSON format.
```json
{
    "sentence1": "আসলে, আমি এমনকি এই বিষয়ে চিন্তাও করিনি, কিন্তু আমি এত হতাশ হয়ে পড়েছিলাম যে, শেষ পর্যন্ত আমি আবার তার সঙ্গে কথা বলতে শুরু করেছিলাম",
    "sentence2": "আমি তার সাথে আবার কথা বলিনি।",
    "label": "contradiction"
}
```

### Data Fields

The data fields are as follows:

- `sentence1`: a `string` feature indicating the premise.
- `sentence2`: a `string` feature indicating the hypothesis.
- `label`: a classification label, with possible values `contradiction` (0), `entailment` (1), and `neutral` (2).

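If the `label` column is exposed as a `datasets.ClassLabel` (an assumption consistent with the integer/string pairs listed above), the two forms can be converted with the standard API:

```python
from datasets import load_dataset

dataset = load_dataset("csebuetnlp/xnli_bn")
label_feature = dataset["train"].features["label"]  # assumed to be a ClassLabel

print(label_feature.int2str(0))             # "contradiction"
print(label_feature.str2int("entailment"))  # 1
```
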
### Data Splits

| split        | count  |
|--------------|--------|
| `train`      | 381449 |
| `validation` | 2419   |
| `test`       | 4895   |

## Dataset Creation

The dataset curation procedure was the same as for the [XNLI](https://aclanthology.org/D18-1269/) dataset: we translated the [MultiNLI](https://aclanthology.org/N18-1101/) training data using the English-to-Bengali translation model introduced [here](https://aclanthology.org/2020.emnlp-main.207/). Since automatic translation can introduce errors, we computed the similarity between each translation and its source sentence using [Language-Agnostic BERT Sentence Embeddings (LaBSE)](https://arxiv.org/abs/2007.01852) and discarded all sentences below a similarity threshold of 0.70.

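A minimal sketch of this filtering step, assuming the [sentence-transformers](https://www.sbert.net/) port of LaBSE (the original pipeline may have used a different implementation):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_translation(source: str, translation: str, threshold: float = 0.70) -> bool:
    """Keep a translated sentence only if it is similar enough to its source."""
    # LaBSE maps sentences from different languages into a shared embedding
    # space, so cosine similarity of the two vectors measures translation fidelity.
    embeddings = model.encode([source, translation], normalize_embeddings=True)
    similarity = float(np.dot(embeddings[0], embeddings[1]))  # cosine (unit vectors)
    return similarity >= threshold
```
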
### Curation Rationale

[More information needed](https://github.com/csebuetnlp/banglabert)

### Source Data

[XNLI](https://aclanthology.org/D18-1269/)

#### Initial Data Collection and Normalization

[More information needed](https://github.com/csebuetnlp/banglabert)

#### Who are the source language producers?

[More information needed](https://github.com/csebuetnlp/banglabert)

### Annotations

[More information needed](https://github.com/csebuetnlp/banglabert)

#### Annotation process

[More information needed](https://github.com/csebuetnlp/banglabert)

#### Who are the annotators?

[More information needed](https://github.com/csebuetnlp/banglabert)

### Personal and Sensitive Information

[More information needed](https://github.com/csebuetnlp/banglabert)

## Considerations for Using the Data

### Social Impact of Dataset

[More information needed](https://github.com/csebuetnlp/banglabert)

### Discussion of Biases

[More information needed](https://github.com/csebuetnlp/banglabert)

### Other Known Limitations

[More information needed](https://github.com/csebuetnlp/banglabert)

## Additional Information

### Dataset Curators

[More information needed](https://github.com/csebuetnlp/banglabert)

### Licensing Information

Contents of this repository are restricted to non-commercial research purposes only, under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/). Copyright of the dataset contents belongs to the original copyright holders.

### Citation Information

If you use the dataset, please cite the following paper:

```
@misc{bhattacharjee2021banglabert,
  title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
  author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
  year={2021},
  eprint={2101.00204},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@abhik1505040](https://github.com/abhik1505040) and [@Tahmid](https://github.com/Tahmid04) for adding this dataset.
data/squad_bn.tar.bz2 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1cb33684f2ba0afd68bc0c4e9ec86d5960daa0bb2434c37e062b4758bbc3d6b9
size 8432345
squad_bn.py ADDED
@@ -0,0 +1,120 @@
"""SQuAD Bengali Dataset"""

import os
import json

import datasets
from datasets.tasks import QuestionAnsweringExtractive


_CITATION = """\
@misc{bhattacharjee2021banglabert,
  title={BanglaBERT: Combating Embedding Barrier in Multilingual Models for Low-Resource Language Understanding},
  author={Abhik Bhattacharjee and Tahmid Hasan and Kazi Samin and Md Saiful Islam and M. Sohel Rahman and Anindya Iqbal and Rifat Shahriyar},
  year={2021},
  eprint={2101.00204},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
"""

_DESCRIPTION = """\
SQuAD-bn is derived from the SQuAD 2.0 and TyDi QA datasets.
"""

_HOMEPAGE = "https://github.com/csebuetnlp/banglabert"
_LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0)"
_URL = "https://huggingface.co/datasets/csebuetnlp/squad_bn/resolve/main/data/squad_bn.tar.bz2"
_VERSION = datasets.Version("0.0.1")


class SquadBn(datasets.GeneratorBasedBuilder):
    """SQuAD Bengali Dataset"""

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="squad_bn",
            version=_VERSION,
            description=_DESCRIPTION,
        )
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "title": datasets.Value("string"),
                    "context": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "answers": datasets.features.Sequence(
                        {
                            "text": datasets.Value("string"),
                            "answer_start": datasets.Value("int32"),
                        }
                    ),
                }
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
            task_templates=[
                QuestionAnsweringExtractive(
                    question_column="question", context_column="context", answers_column="answers"
                )
            ],
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # The archive extracts to a `squad_bn/` directory with one JSON file per split.
        data_dir = os.path.join(dl_manager.download_and_extract(_URL), "squad_bn")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(data_dir, "train.json")},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"filepath": os.path.join(data_dir, "test.json")},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"filepath": os.path.join(data_dir, "validation.json")},
            ),
        ]

    def _generate_examples(self, filepath):
        """Yields examples as (key, example) tuples."""
        # The files follow the original SQuAD JSON layout:
        # data -> paragraphs -> qas -> answers.
        with open(filepath, encoding="utf-8") as f:
            data = json.load(f)
        for example in data["data"]:
            title = example.get("title", "")
            for paragraph in example["paragraphs"]:
                context = paragraph["context"].strip()
                for qa in paragraph["qas"]:
                    question = qa["question"].strip()
                    id_ = qa["id"]

                    answer_starts = [answer["answer_start"] for answer in qa["answers"]]
                    answers = [answer["text"].strip() for answer in qa["answers"]]

                    yield id_, {
                        "title": title,
                        "context": context,
                        "question": question,
                        "id": id_,
                        "answers": {
                            "answer_start": answer_starts,
                            "text": answers,
                        },
                    }
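
Once this script and the data archive are in the repository, the dataset loads by name through the standard `datasets` API; the split names match the `SplitGenerator`s above:

```python
from datasets import load_dataset

# Runs squad_bn.py, which downloads and extracts data/squad_bn.tar.bz2.
dataset = load_dataset("csebuetnlp/squad_bn")
print(dataset["validation"][0]["question"])
```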