[1]杨 起,刘牧耕,马 郓.一种面向UI 手稿识别的数据集制作方法[J].郑州大学学报(工学版),2022,43(06):1-7.[doi:10.13705/j.issn.1671-6833.2022.06.009]
 YANG Qi,LIU Mugeng,MA Yun.An Efficient Approach to Creating Hand-Drawn Dataset for UI Manuscript Recognition[J].Journal of Zhengzhou University (Engineering Science),2022,43(06):1-7.[doi:10.13705/j.issn.1671-6833.2022.06.009]

一种面向UI手稿识别的数据集制作方法

《郑州大学学报(工学版)》[ISSN:1671-6833/CN:41-1339/T]

Volume: 43
Issue: 2022, No. 06
Pages: 1-7
Publication date: 2022-09-02

文章信息/Info

Title:
An Efficient Approach to Creating Hand-Drawn Dataset for UI Manuscript Recognition
作者:
杨 起1 刘牧耕2 马 郓3
1.北京大学深圳研究生院信息工程学院;2.北京大学计算机学院;3.北京大学人工智能研究院;

Author(s):
YANG Qi1 LIU Mugeng2 MA Yun3
1.School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; 
2.Department of Computer Science, Peking University, Beijing 100871, China; 
3. Institute for Artificial Intelligence, Peking University, Beijing 100871, China
Keywords:
intelligent software developing service; UI manuscript recognition; object detection; dataset; data enhancement
CLC number:
TP311;O244
DOI:
10.13705/j.issn.1671-6833.2022.06.009
Document code:
A
摘要:
UI手稿识别是图像目标检测技术在软件工程领域的重要应用。由于UI手稿图像与自然图像有着较大的差异,而且主要依靠人工绘制,所以制作用于深度学习模型训练的UI手稿数据集往往比较困难,耗费大量人力。针对此问题,通过对UI手稿数据集的制作流程进行优化改进,提出了一种UI手稿数据集快速制作方法UIsketcher。在UIsketcher方法中,用户只需要完成一些基础UI组件的绘制,不需要任何框选标注,即可自动生成用于深度学习模型训练的数据集。与传统方法进行对比实验,结果表明:用户只需要绘制相对于传统方法25%的组件数量,即可得到相似的训练效果;若绘制传统方法75%的组件数量,训练效果将更好,可达到比传统方法更高的准确率。
Abstract:
UI manuscript recognition is an important application of image object detection in software engineering. Because UI manuscript images differ considerably from natural images and are mainly drawn by hand, building UI manuscript datasets for deep learning is difficult and requires substantial manual effort. To address this issue, an approach called UIsketcher was proposed to generate UI manuscript datasets efficiently by optimizing the conventional dataset construction workflow. With UIsketcher, users only need to draw some basic UI components, without any bounding-box labeling, and a dataset for training deep learning models is then generated automatically. Comparative experiments showed that with only 25% of the drawing workload of the traditional method, UIsketcher achieved similar training results; with 75% of the workload, it achieved even higher accuracy than the traditional method.
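
To make the idea of label-free dataset generation concrete, the sketch below composes individually drawn component crops onto blank canvases and records each component's bounding box by construction, writing YOLO-style annotation files. This is only an illustrative sketch, not the authors' UIsketcher implementation: the directory layout (components/<class_name>/*.png), canvas size, output paths, and label format are assumptions made for the example.

# Illustrative sketch only (not the paper's released code): synthesize annotated
# UI-manuscript training images from a folder of individually drawn components,
# so every bounding box is known by construction and no manual labeling is needed.
# Directory layout, class names, canvas size, and YOLO-style label format are assumptions.
import random
from pathlib import Path
from PIL import Image

COMPONENT_DIR = Path("components")   # assumed layout: components/<class_name>/*.png
OUTPUT_DIR = Path("synthetic")       # hypothetical output location
CANVAS_SIZE = (640, 640)             # white "sheet of paper" background

def synthesize(n_images: int = 100, max_components: int = 8) -> None:
    classes = sorted(p.name for p in COMPONENT_DIR.iterdir() if p.is_dir())
    (OUTPUT_DIR / "images").mkdir(parents=True, exist_ok=True)
    (OUTPUT_DIR / "labels").mkdir(parents=True, exist_ok=True)
    for i in range(n_images):
        canvas = Image.new("RGB", CANVAS_SIZE, "white")
        lines = []
        for _ in range(random.randint(1, max_components)):
            cls_id = random.randrange(len(classes))
            crops = list((COMPONENT_DIR / classes[cls_id]).glob("*.png"))
            if not crops:
                continue
            crop = Image.open(random.choice(crops)).convert("RGB")
            if crop.width >= CANVAS_SIZE[0] or crop.height >= CANVAS_SIZE[1]:
                continue  # skip components larger than the canvas
            # Random placement: the paste position directly yields the ground-truth box.
            x = random.randint(0, CANVAS_SIZE[0] - crop.width)
            y = random.randint(0, CANVAS_SIZE[1] - crop.height)
            canvas.paste(crop, (x, y))
            # YOLO label line: class x_center y_center width height (all normalized).
            lines.append(f"{cls_id} {(x + crop.width / 2) / CANVAS_SIZE[0]:.6f} "
                         f"{(y + crop.height / 2) / CANVAS_SIZE[1]:.6f} "
                         f"{crop.width / CANVAS_SIZE[0]:.6f} "
                         f"{crop.height / CANVAS_SIZE[1]:.6f}")
        canvas.save(OUTPUT_DIR / "images" / f"{i:05d}.png")
        (OUTPUT_DIR / "labels" / f"{i:05d}.txt").write_text("\n".join(lines))

Images and labels produced this way can be fed directly to standard detectors such as Faster R-CNN or YOLOv5 (references [10] and [12]).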

参考文献/References:

[1] 魏宏彬, 张端金, 杜广明, 等. 基于改进型YOLO v3的蔬菜识别算法[J]. 郑州大学学报(工学版), 2020, 41(2): 7-12, 31.
WEI H B, ZHANG D J, DU G M, et al. Vegetable recognition algorithm based on improved YOLOv3[J]. Journal of Zhengzhou university (engineering science), 2020, 41(2): 7-12, 31.
[2] 楼豪杰, 郑元林, 廖开阳, 等. 基于Siamese-YOLOv4的印刷品缺陷目标检测[J]. 计算机应用, 2021, 41(11): 3206-3212.
LOU H J, ZHENG Y L, LIAO K Y, et al. Defect target detection for printed matter based on Siamese-YOLOv4[J]. Journal of computer applications, 2021, 41(11): 3206-3212.
[3] 丁明宇, 牛玉磊, 卢志武, 等. 基于深度学习的图片中商品参数识别方法[J]. 软件学报, 2018, 29(4): 1039-1048.
DING M Y, NIU Y L, LU Z W, et al. Deep learning for parameter recognition in commodity images[J]. Journal of software, 2018, 29(4): 1039-1048.
[4] ALES Z, LUKÁS P, ANTONÍN R. Sketch2Code: automatic hand-drawn UI elements detection with faster R-CNN[EB/OL]. (2020-09-22)[2021-04-12]. http://ceur-ws.org/Vol-2696/paper_82.pdf.
[5] WIMMER C, UNTERTRIFALLER A, GRECHENIG T. SketchingInterfaces: a tool for automatically generating high-fidelity user interface mockups from hand-drawn sketches[C]//32nd Australian Conference on Human-Computer Interaction. New York: ACM, 2020: 538-545.
[6] JOÃO S F, ANDRÉ R, HUGO S F. Automatically generating websites from hand-drawn mockups[C]//International Conference on Computer Vision Theory and Applications. Vienna: VISAPP, 2021: 48-58.
[7] FICHOU D, BERARI R, BRIE P, et al. Overview of the 2020 imageCLEFdrawnUI task: detection and recognition of hand drawn website UIs[EB/OL]. (2020-09-22)[2021-04-12]. http://ceur-ws.org/Vol-2696/paper_245.pdf.
[8] CHEN J S, XIE M L, XING Z C, et al. Object detection for graphical user interface: old fashioned or deep learning or a combination[C]//Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. New York: ACM, 2020: 1202-1214.
[9] GHIASI G, CUI Y, SRINIVAS A, et al. Simple copy-paste is a strong data augmentation method for instance segmentation[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2021: 2917-2927.
[10] REN S Q, HE K M, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2017, 39(6): 1137-1149.
[11] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: unified, real-time object detection[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2016: 779-788.
[12] ULTRALYTICS. YOLOv5[EB/OL]. [2021-04-12]. https://github.com/ultralytics/yolov5.

更新日期/Last Update: 2022-10-03