Subjects: Library Science,Information Science >> Library Science submitted time 2023-07-26 Cooperative journals: 《图书情报工作》
Abstract: [Purpose/significance] Statistical analysis of sentence length along different dimensions and of vocabulary distribution, based on the New Era People's Daily (NEPD) word segmentation corpus, not only supports a relatively comprehensive and systematic understanding of the linguistic characteristics of contemporary Chinese text, but also benefits subsequent natural language processing and text mining of the text. [Method/process] Based on the word segmentation data of People's Daily for January 2018 and for January 1998, six sentence categories used in the statistics were determined; sentence length distributions in both character and word units were counted and analyzed, and the static distribution of words was revealed based on Zipf's law. [Result/conclusion] From the perspective of sentence length distribution in the word dimension and the Zipf distribution of vocabulary, both sentence length and vocabulary distribution changed between the 1998 and 2018 corpora over time, but the change is continuous and correlated.
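Zipf's law, which the abstract above applies to the corpus vocabulary, states that a word's frequency is roughly inversely proportional to its frequency rank, so rank × frequency stays roughly constant. A minimal sketch of that check (the toy word list is illustrative, not drawn from NEPD):

```python
from collections import Counter

def zipf_check(tokens):
    """Rank words by frequency and return (rank, frequency, rank*frequency).

    Under an ideal Zipf distribution, rank * frequency is roughly constant.
    """
    counts = Counter(tokens)
    ranked = sorted(counts.values(), reverse=True)
    return [(r, f, r * f) for r, f in enumerate(ranked, start=1)]

# Toy corpus: frequencies 6, 3, 2 approximate the 1/rank pattern.
tokens = ["的"] * 6 + ["人民"] * 3 + ["日报"] * 2
for rank, freq, product in zipf_check(tokens):
    print(rank, freq, product)
```

On real corpora the product is only approximately constant, which is why the paper compares the full rank-frequency curves of the 1998 and 2018 data rather than a single constant.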
Subjects: Library Science,Information Science >> Library Science submitted time 2023-07-26 Cooperative journals: 《图书情报工作》
Abstract: [Purpose/significance] On the basis of the New Era People's Daily (NEPD) word segmentation corpus, building deep learning models for automatic word segmentation not only helps provide relevant experience for constructing high-performance segmentation models, but also verifies the performance of the corresponding deep learning models on a concrete natural language processing task. [Method/process] After introducing Bi-directional Long Short-Term Memory (Bi-LSTM) and Bi-directional Long Short-Term Memory with a conditional random field layer (Bi-LSTM-CRF), this paper describes the process, types and details of Chinese word segmentation preprocessing, the evaluation indexes, the parameters and the hardware platform; Bi-LSTM and Bi-LSTM-CRF Chinese automatic word segmentation models were then constructed, and their overall performance was analyzed. [Result/conclusion] Measured by the three indexes of precision, recall and F value, the overall performance of both the Bi-LSTM and Bi-LSTM-CRF Chinese automatic word segmentation models is reasonable. In terms of specific performance, the Bi-LSTM segmentation model is superior to the Bi-LSTM-CRF model, but the difference is very small.
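The segmentation preprocessing the abstract mentions is typically cast as character-level sequence labeling: each character of a segmented sentence is mapped to a B/M/E/S tag (begin, middle or end of a multi-character word, or a single-character word), which then serves as the prediction target for Bi-LSTM and Bi-LSTM-CRF models. A minimal sketch of that conversion, assuming the standard 4-tag BMES scheme (the example words are illustrative):

```python
def words_to_bmes(words):
    """Convert a segmented sentence (list of words) to per-character BMES tags."""
    chars, tags = [], []
    for word in words:
        chars.extend(word)
        if len(word) == 1:
            tags.append("S")  # single-character word
        else:
            # first char -> B, interior chars -> M, last char -> E
            tags.extend(["B"] + ["M"] * (len(word) - 2) + ["E"])
    return chars, tags

chars, tags = words_to_bmes(["人民", "日报", "的", "新时代"])
print(list(zip(chars, tags)))
```

At decoding time the inverse mapping applies: a predicted tag sequence is cut back into words at every E and S tag.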
Subjects: Library Science,Information Science >> Library Science submitted time 2023-07-26 Cooperative journals: 《图书情报工作》
Abstract: [Purpose/significance] The construction of a segmented corpus of People's Daily in line with the new era provides a new annotated corpus for Chinese information processing, and also offers new language resources for analyzing modern Chinese from a diachronic perspective. [Method/process] On the basis of analyzing existing Chinese word segmentation corpora, the data source, annotation specification and process of the constructed corpus are explained; in addition, the corpus is evaluated by building an automatic word segmentation model and comparing it with an existing corpus. [Result/conclusion] The New Era People's Daily Segmented Corpus (NEPD), with its large scale and long time span, follows the basic processing standards of modern Chinese corpora. The January 2018 part of NEPD is selected to build a segmentation model based on conditional random fields; the performance of the January 2018 corpus is evaluated and compared with that of the January 1998 People's Daily corpus. The evaluation indexes show that the overall performance of the new era corpus is relatively outstanding. The 1998 corpus cannot be replaced, but constructing NEPD is nevertheless very necessary.
Subjects: Library Science,Information Science >> Information Science submitted time 2017-12-05 Cooperative journals: 《数据分析与知识发现》
Abstract: [Objective] To explore and design a structure for constructing figure-table abstracts of Chinese scientific papers in the library and information science (LIS) field, and to formulate construction rules. [Method] Through a survey, combined with manual annotation results and the characteristics of Chinese scientific papers and their figures and tables in the LIS field, an abstract framework was designed and construction rules were specified; finally, an evaluation system was designed, and the performance of the abstract system was analyzed on the basis of SPSS statistical results. [Result] The figure-table abstracts constructed in this study outperform the existing picture-plus-text mode in dimensions such as degree of image information comprehension, efficiency, and confidence. [Limitation] The coverage of image information needs improvement; differences arising from chart types were not fully considered; automatic indexing was not fully implemented. [Conclusion] Figure-table abstracts formed according to the construction structure and rules designed in this study can effectively improve users' accurate understanding of the main content of a document.
Subjects: Library Science,Information Science >> Information Science submitted time 2017-11-08 Cooperative journals: 《数据分析与知识发现》
Abstract: [Objective] To extract food safety event entities from large-scale food safety event texts. [Method] Based on food safety events that have already occurred, combining information science methods for data acquisition, annotation and organization, and fusing multiple kinds of distributional feature knowledge about food safety event entities, a food safety event corpus was constructed and the corresponding entities were extracted from it with a conditional random field (CRF) model. [Result] On an annotated food safety event corpus of 15 million characters, by counting the internal and external features of food safety event entities, an extraction model for food safety entities was built on the CRF machine learning model; the model's highest F value reaches 91.94%. [Limitation] The feature templates formulated during entity extraction have certain limitations when transferred to other domains. [Conclusion] Analysis of the entity extraction results shows that CRF-based entity extraction is feasible on this domain-specific food corpus.
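The "internal and external features" fed to a CRF are conventionally expressed as feature templates over a sliding character window. A minimal sketch of unigram window features (the template names C-1/C0/C+1 and the padding token are illustrative; the paper's actual templates are not specified here):

```python
def window_features(sentence, i):
    """Unigram features for position i: previous, current and next character."""
    def char_at(j):
        # Pad positions outside the sentence with a sentinel token.
        return sentence[j] if 0 <= j < len(sentence) else "<PAD>"
    return {
        "C-1": char_at(i - 1),  # previous character
        "C0":  char_at(i),      # current character
        "C+1": char_at(i + 1),  # next character
    }

print(window_features("三聚氰胺", 1))
```

A full template set would add bigram and dictionary features; the CRF then learns a weight for each (feature, tag) pair rather than for raw characters.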
Subjects: Library Science,Information Science >> Information Science submitted time 2017-11-08 Cooperative journals: 《数据分析与知识发现》
Abstract: [Objective] In the food safety domain, building relevant databases greatly helps the supervision and control of food safety, and automatic word segmentation plays a crucial role in building indexes, using indexes, and constructing corpora. This paper applies the character-tagging statistical learning method based on conditional random fields to automatic word segmentation of a food safety emergency corpus. [Method] Characteristics of the corpus such as its word-length distribution were analyzed, and experiments were conducted on the feature selection and feature templates involved in automatic segmentation, yielding the influence of different feature selections and feature templates on the segmentation results. [Result] The experiments show that more features do not necessarily produce better segmentation; feature interference can occur. In the food safety emergency corpus, where two- and three-character words account for 46.62%, the feature templates representing the current character and the first preceding and following characters have a marked influence on segmentation performance. [Conclusion] Through experiments on different feature selections, feature templates and their combinations, the optimal features and feature templates for automatic segmentation of the corpus studied here were selected; under the 5-tag scheme with the corresponding feature templates, the F value of segmenting the target corpus reaches 92.88%.
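The F value reported above is conventionally computed over word spans: a predicted word counts as correct only if both of its boundaries match the gold segmentation. A minimal sketch of that span-based scoring (the example sentence is illustrative):

```python
def spans(words):
    """Convert a word sequence to a set of (start, end) character spans."""
    out, pos = set(), 0
    for w in words:
        out.add((pos, pos + len(w)))
        pos += len(w)
    return out

def prf(gold_words, pred_words):
    """Span-based precision, recall and F value for word segmentation."""
    gold, pred = spans(gold_words), spans(pred_words)
    correct = len(gold & pred)
    p = correct / len(pred)
    r = correct / len(gold)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Gold: 食品|安全|事件   Predicted: 食品|安全事件
p, r, f = prf(["食品", "安全", "事件"], ["食品", "安全事件"])
print(round(p, 2), round(r, 2), round(f, 2))
```

Here only 食品 matches on both boundaries, so precision is 1/2, recall is 1/3, and the F value is 0.4.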