Zhang et al. / Front Inform Technol Electron Eng in press 1

Frontiers of Information Technology & Electronic Engineering
www.jzus.zju.edu.cn; engineering.cae.cn; www.springerlink.com
ISSN 2095-9184 (print); ISSN 2095-9230 (online)
E-mail: [email protected]

Visual knowledge guided intelligent generation of Chinese seal carving∗

Kejun ZHANG†1, Rui ZHANG†1, Yehang YIN1, Yifei LI2, Wenqi WU1, Lingyun SUN1, Fei WU1, Huanghuang DENG1, Yunhe PAN†‡1
1State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027, China
2School of Software Technology, Zhejiang University, Hangzhou 310027, China
†E-mail: [email protected]; [email protected]; [email protected]
Received Feb. 22, 2021; Revision accepted June 20, 2021; Crosschecked

Abstract: We digitally reproduced the process of resource collaboration, design creation, and visual presentation of Chinese seal-carving art. We developed an intelligent seal-carving art generation system (Zhejiang University Intelligent Seal-Carving System: http://www.next.zju.edu.cn/seal/; seal-carving search and layout system: http://www.next.zju.edu.cn/seal/search_app/) to solve the difficulties using a visual knowledge-guided computational art approach. The knowledge base in this study is the Qiushi seal-carving database, which consists of open datasets of images of seal characters and seal stamps. We proposed a seal character generation method based on visual knowledge and guided by the database and expertise. Further, to create the layout of the seal, we proposed a deformation algorithm to adjust the seal characters and calculated layout parameters from the database and knowledge to achieve an intelligent structure. Experiments show that this method and system can effectively solve the difficulties in the generation of seal carvings. Our work provides theoretical and applied references for the rebirth and innovation of seal-carving art.
Key words: Seal-carving; Intelligent generation; Deep learning; Parametric modeling; Computational art
https://doi.org/10.1631/FITEE.2100094
CLC number:

1 Introduction

Seal-carving is the art of engraving Chinese characters on seals, and it has a 3000-year history. The seal was initially used as a practical tool that performs credibility authentication in political and economic activities. The discovery and popularization of ophicalcite enabled the literati to carve seals themselves. Due to the rise of seals for signing art collections, the function of seal-carving began to change from practical use to artistic appreciation. Over time, it became a popular form of art among the literati (Gu, 2013).

The art of seal-carving adheres to classical conventions, and the seal script is the major script style used for seal-carving. The correctness of the characters' orthography is traditionally an important aspect of the seal-carving technique (Li, 2009), and the commonly used seal script typologies are very different from modern Chinese characters. Chinese characters' structural and stroke features have changed so much that the seal script can no longer be considered a style of modern Chinese characters (Wang, 1980). Seal script information processing technology is slowly evolving, including industry standards for encoding (Unicode Consortium, 2020), recognition, and glyph production. Seal script, on the other hand, can still be seen in historical locations, cultural artifacts, and antiques, as well as in books and calligraphy works (Qiu et al., 2000).

Previous studies of Chinese seal-carving art have primarily focused on the identification and authentication of the entire seal (Fan and Tsai, 1984; Chen, 1995, 1996; Su, 2007a, 2007b), but there have been few studies on the production of seal characters. Compared with Chinese character calligraphy and other art forms, seal-carving remains time-consuming, economically costly, and laborious. Therefore, exploring the intelligent generation of seal-carving art could improve the efficiency and quality of seal-carving creation and assist in reviving this ancient art form. It would also lower the cost of seal-carving, enhance the quality of seal products, and make seal-carving art more accessible to people.

The intelligent generation of seal-carving art is complex and challenging, requiring multiple disciplines across computer science, such as data science, computer vision, computer graphics, and human-computer interaction. However, "visual knowledge" (Pan, 2019), as a new form of knowledge representation, can guide many tasks in computer vision, including the generation of seal carvings.

This paper proposes a method to generate a seal with a specific style and appropriate layout from a simple layout of a standard script. The database of seal characters and seal stamps offers essential visual knowledge, which guides the generation of seal characters and the layout of a seal. Fig. 1 depicts the flow chart of the intelligent generation of seal-carving. Our study can also be applied to other forms of art and design of Chinese characters, thereby serving the cultural and creative industry as well as inheriting and promoting traditional culture.

‡ Corresponding author
∗ Project supported by the National Key Research and Development Program of China (No. 2017YFB1402104) and the Fundamental Research Funds for the Central Universities (No. 2020QNA5023)
ORCID: Kejun ZHANG, https://orcid.org/0000-0002-0778-2303
© Zhejiang University Press 2021

[Fig. 1 Flow chart for intelligent generation of Chinese seal-carving: the layout of the standard script and the database of seal characters and seal stamps guide the generation of seal characters and the layout of the seal, yielding the final seal]

2 Related studies

2.1 Generation of characters

2.1.1 Based on interaction

Most of the early studies conducted on character generation were based on interactive methods. Artists undertook this type of study, and computers offered assistance in improving the efficiency of human design and artistic creation (Zhang, 2019). According to the study content, it can be roughly divided into two types: Chinese calligraphy generation and font design.

For Chinese calligraphy generation, researchers mainly study the digital modeling of calligraphy tools, including virtual brush modeling and ink diffusion simulation. Wang and Pang (1986) proposed a computer-based Chinese calligraphy system. They simulated writing brushes, built a library of strokes, and used human-computer interaction to combine characters. In the same year, Strassmann (1986) studied virtual brush modeling. The author divided the functions of the brush-writing process into the brush, stroke, dip, and paper, and parameterized them separately. After that, more researchers adopted various methods of modeling. For virtual brush modeling, some researchers start from the contact shape of the brush and paper and employ the scattered point set (Yu et al., 1996), ellipse (Wong and Ip, 2000), raindrop (Mi et al., 2002; Bi and Tang, 2003), etc. The user specifies the crucial locations of the brush's movement trajectory to make calligraphic characters; however, the parameter settings are extremely complex. Xu et al. (2002) and Girshick (2004) developed generation models based on the movement of the mouse. Lu et al. (2013) proposed a data-driven calligraphy and painting system, RealBrush, which can synthesize more colorful and textured effects. Some researchers also applied mechanical methods to model brushes from a physical viewpoint (Lee, 1999; Saito and Nakajima, 1999; Baxter et al., 2001). In addition, Chu and Tai (2004, 2005) installed sensors on the brush to simulate the movement of the brush in three-dimensional space. For the ink diffusion effect, Guo and Kunii (1991) considered the paper fiber structure to simulate the dynamic diffusion of ink, and Lee (1999) proposed a Chinese art paper model using a network structure and an ink diffusion algorithm.

Many font designers choose auxiliary design systems for font creation, such as FontCreator (High-Logic, 2020), FontLab (FontLab, 2020), and Glyphs (Glyphs, 2020). Chinese font design companies, such as Founder, have also launched auxiliary systems (Founder Group, 2020) dedicated to Chinese font design. Professional designers often utilize these tools to assist in font creation and adjustment.

In summary, interactive glyph generation methods can generate glyphs in a relatively controllable manner. Users can control the creation process, and the resulting glyphs are guaranteed to be straightforward and pleasing (Wang and Pang, 1986). However, the disadvantage of this approach is that it is not intelligent enough; it depends heavily on the user's professional knowledge and still requires a lot of user work.

2.1.2 Based on graphics

Glyph generation based on graphical methods generally starts at the font component level (such as radicals, strokes, and skeletons). Then, it automatically generates glyphs through various techniques, such as parameterized representation, component mapping, and statistical models (Zhang, 2019). Moreover, the objects of glyph generation are mostly modern characters, such as regular, cursive, and running scripts, and even personalized handwriting. Unfortunately, there are very few studies of glyph generation for seal script.

Xu (2007) and Xu et al. (2009) introduced a calligraphy creation method based on analogical and integrated reasoning. A matching model based on strokes was created using a hierarchical parameter representation of characters. The specific process disassembles the characters to be generated and obtains a new character; the matching model chooses similar strokes to match the corresponding topological structure. Shi et al. (2014) modeled Chinese character components, constructed a dynamic Bayes model, and adjusted character generation by applying the condition equation. Based on texture mapping, Yu and Peng (2005) proposed a method for generating cursive script, and Markov interpolation was used for texture synthesis. Dong et al. (2008) considered a calligraphy simulation based on statistical models. Li et al. (2014) proposed a WF-histogram to measure a character's topological features and then synthesize topologically consistent characters.

There is also significant research on personalized fonts, in addition to Chinese calligraphy. Zhou et al. (2011) proposed a model based on the mapping between familiar characters and handwriting, and all characters were generated using 20% handwriting.
Lian and Xiao (2012) generated handwriting by creating standard word templates, matching handwriting with familiar characters, and replacing corresponding parts. Additionally, Lin et al. (2014, 2015) developed a system for generating partial handwriting. Zong and Zhu (2014) created a component mapping vocabulary that automatically matched standard characters and handwriting; then, without any structural input from individuals, they developed handwriting in a similar style.

In summary, the generation of glyphs using graphical approaches is mainly dependent on the parametric representation and modeling of font components; the created characters are usually stable. Although graphical methods can eventually develop characters intelligently, specific expert knowledge is required to disassemble and model characters. Furthermore, the number of characters is limited by the complexity of disassembly and modeling, and the cost of large-scale use is prohibitively high.

2.1.3 Based on machine learning

In recent years, machine learning, especially deep learning, has rapidly developed and is extensively applied in computer vision, graphics, etc. For example, numerous researchers have utilized deep learning methods for font generation and achieved a series of results (Zhang, 2019). The key techniques are convolutional neural networks (CNNs), variational autoencoders (VAEs), generative adversarial networks (GANs), recurrent neural networks (RNNs), and feedforward neural networks (FFNNs).

Tian (2020a) used CNNs to transfer the font style of a regular script, built a style transfer network using five-layer convolution, and tested fonts such as song and regular scripts, but the results were poor. After "pix2pix" (Isola et al., 2017), a kind of GAN, was proposed, Tian (2020b) was motivated to conduct experiments on a regular script, named "zi2zi", and achieved good results. Jiang et al. (2017) proposed DCFont, based on a deep convolutional generative adversarial network, which learned to generate handwritten characters. The authors then proposed SCFont (Jiang et al., 2019) using high-level information such as skeletons and strokes. Wen et al. (2019) used CNNs and GANs to generate characters before optimizing them. To encode content and style separately and generate Chinese characters, Zhang et al. (2018) utilized the encoder-mixer-decoder (EMD) framework. Sun et al. (2017) used a style-aware variational autoencoder (SA-VAE) to encode content and style separately, together with multi-level information such as structures and radicals. Furthermore, using a hierarchical content generator and a discriminator that reads hierarchical information, Chang et al. (2018b) employed a hierarchical GAN known as the hierarchical adversarial network (HAN) to generate characters. Lyu et al. (2017) used an auto-encoding generative adversarial network to generate ancient calligraphers' handwriting. Lian et al. (2018) proposed EasyFont, which extracts and classifies strokes before using an FFNN to generate characters. Tang et al. (2019) proposed FontRNN, which generates characters by using an encoder and a decoder with RNN input. Chang et al. (2018a) used the unpaired CycleGAN method, which employs two generators and two discriminators, to generate characters. Zheng and Zhang (2018) proposed CocoANN, which uses an adversarial approach to optimize two content and style encoders to generate characters. The details are shown in Table 1.

Three data dimensions (amount, paired or not, and data label) can be used to describe the attributes of the seal character database. The amount of data is mainly reflected in the number of font types and the number of characters in a single font. The more font types there are, the more generalized the algorithm; the smaller the amount of data, the stronger the creativity of the algorithm. Further, the goal of character-generation algorithm research is to use less data to obtain better results on more font types. Paired data mean that there is a one-to-one correspondence between the target and reference fonts. The machine can have a more accurate understanding when data are input into the model in pairs. The data label indicates that the algorithm uses information in addition to the image information. Algorithms that do not use additional labels use only the glyph image information, training on a single font each time; to learn the associations and differences between different font categories, algorithms that use font category labels are trained on multiple fonts concurrently. Algorithms that use high-level semantic tags, such as radicals, strokes, skeletons, key points, and time series of key points, can better understand the composition of characters and achieve more accurate results. Various models are used in glyph-generation research. GANs and CNNs are the most utilized models. In addition, the generator + discriminator framework and the encoder + decoder framework are the most common, and multi-model combination and multi-frame nesting are usually adopted.

In summary, font-generation algorithms based on machine learning are more intelligent and do not need any expert knowledge. However, few studies have focused on seal-carving.

Table 1 References about the generation of Chinese characters

Research | Method | Database | Additional label | Model
Tian et al. (2016) | Rewrite | 3000 characters, paired data | None | CNN
Lyu et al. (2017) | AEGN | 3000 characters, paired data | None | GAN
Chang et al. (2018b) | HAN | 3000 characters, paired data | None | GAN
Wen et al. (2019) | CSR | 750 characters, paired data | None | CNN, GAN
Chang et al. (2018a) | CycleGAN | Multiple fonts, each 3000 characters, unpaired data | None | GAN
Zheng et al. (2018) | CocoANN | Multiple fonts, each 6763 characters, unpaired data | None | CNN, GAN
Tian et al. (2017) | zi2zi | Multiple fonts, each 3000 characters, paired data | Font category | GAN
Jiang et al. (2017) | DCFont | Multiple fonts, each 775 characters, paired data | Font category | GAN
Zhang et al. (2018) | EMD | Multiple fonts, each 1732 characters, paired data | Font category | CNN
Sun et al. (2018) | SA-VAE | Multiple fonts, each 3000 characters, paired data | Radical structure | VAE
Jiang et al. (2019) | SCFont | Multiple fonts, each 775 characters, paired data | Stroke category | CNN, GAN
Lian et al. (2018) | EasyFont | Multiple fonts, each 775 characters, paired data | Key points of skeleton | FFNN
Tang et al. (2019) | FontRNN | Multiple fonts, each 775 characters, paired data | Time series of key points | CNN, RNN

CNN: convolutional neural network; EMD: encoder-mixer-decoder network; FFNN: feed-forward neural network; GAN: generative adversarial network; HAN: hierarchical adversarial network; SA-VAE: style-aware variational auto-encoder; VAE: variational auto-encoder

The seal script used in seal-carving has existed for about 3000 years and is very different from standard writings. Because of this uniqueness, most of this research cannot be directly applied to seal script.
2.2 Layout of characters

There is little research or system development concerning the layout of calligraphy or seals. Leung (2004) analyzed and synthesized traditional Chinese seals, and proposed a method for generating seal images from handwritten Chinese characters. Xu (2007b) designed and developed an interactive calligraphy plaque generation system that allows users to search for images of calligraphy characters with consistent content from an established calligraphy database; users can then adjust the layout of the characters on the plaque. Yu (2010) designed a system that can calculate the layout based on the size of the plaque and unify the size and stroke width of the calligraphy characters. However, related studies on the layout of characters on seals are rare.

3 Method

"Visual knowledge" (Pan, 2019) is a new form of knowledge representation that can guide many visual tasks, for example, the transformation from one visual image to another. Pan (2021) takes our intelligent generation of Chinese seal carving as a typical example, and we will show how visual knowledge guides us.

Visual knowledge K can express the dimensions, color, texture, spatial shape, and spatial relationships of an object. The task of style transfer can be expressed as K_Y = G(K_X), where K_X is the knowledge of style X, K_Y is the knowledge of style Y, and G is the generator that transforms style X into style Y. In our task, we take the visual knowledge of seal carving as K_SC. Style X is the layout of the standard script, and style Y is the real seal. A feasible way to achieve K_SC^Y = G(K_SC^X) is to collect paired data of seals in style X and style Y and train a deep learning model. However, this simple method does not effectively use visual knowledge. For example, some sub-knowledge is the same between styles X and Y, such as the character skeletons on the seals. Some sub-knowledge can also be obtained using graphical and statistical methods, such as the layout of characters. The division and explicit expression of this sub-knowledge can guide and improve the generation significantly. Moreover, there are many different glyphs of the same characters, and they can hardly be paired at the pixel level, which means an end-to-end model can barely work. Thus, we propose a novel method guided by visual knowledge.

A visual concept usually has a hierarchical structure. For the visual knowledge of seal carving K_SC, we construct a hierarchical structure as shown in Fig. 2. Now we have the layout of the standard script K_SC^X, and our target is to get a novel seal K_SC^Y.
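The decomposition just described, reusing the skeleton sub-knowledge K_Sk^X and replacing only the rendering with K_R^Y, can be sketched as a small pipeline. This is a minimal illustration under stated assumptions, not the paper's implementation: extract_skeleton and render_seal_style are hypothetical stand-ins for the medial-axis skeletonization and the learned GAN renderer described later.

```python
import numpy as np

def extract_skeleton(glyph_x: np.ndarray) -> np.ndarray:
    # Stand-in for skeleton extraction K_Sk (the paper uses a medial
    # axis transform); here we merely binarize the style-X glyph.
    return (glyph_x > 0).astype(np.uint8)

def render_seal_style(skeleton: np.ndarray) -> np.ndarray:
    # Stand-in for the learned rendering K_R^Y (a zi2zi-like GAN in the
    # paper); a one-pixel dilation plays the role of "rendering" here.
    h, w = skeleton.shape
    padded = np.pad(skeleton, 1)
    out = np.zeros_like(skeleton)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def generate_seal_character(glyph_x: np.ndarray) -> np.ndarray:
    # K_Sk^Y = K_Sk^X is assumed, so the style-X skeleton is reused and
    # only the rendering is swapped for the style-Y renderer.
    return render_seal_style(extract_skeleton(glyph_x))
```

The point of the sketch is structural: the skeleton is computed once from the complete style-X character set, and only the rendering stage depends on the target style.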

[Fig. 2 Visual knowledge of seal carving: for each style (X and Y), the seal carving K_SC comprises the carving K_Ca and the seal stamp K_S; the seal stamp comprises the characters K_C (skeleton K_Sk and rendering K_R) and the layout K_L; the transfer maps style X to style Y]

The character set of the standard script is sufficient, but the character set of the seal script lacks many characters. Thus, we can use all knowledge of style X, i.e., K^X, as the source rather than style Y. The standard script is written with a brush, whereas the characters on the seals are carved, so their textures are different but their skeletons are almost the same. Thus, we assume that the skeleton knowledge of styles X and Y is the same, i.e., K_Sk^X = K_Sk^Y. Then we only need to get K_SC^Y from K_Sk^Y.

In detail, to get a seal carving K_SC^Y, we construct a system to get a seal stamp K_S^Y and combine it with a carving machine K_Ca. To get a seal stamp K_S^Y, we generate seal characters K_C^Y and calculate the layout K_L^Y from our seal stamp dataset. In addition, we propose a deformation algorithm to combine K_C^Y and K_L^Y. Then we construct a paired dataset of the incomplete characters K_C^Y and their skeletons, and we train a deep learning model to get the rendering K_R^Y. Now we only need K_Sk^Y = K_Sk^X, which can be easily extracted from the standard scripts K_C^X.

4 Seal-carving database and knowledge

Visual learning of seals is the prominent issue in the intelligent generation of Chinese seal carving, so we constructed the Qiushi Seal-Carving Database as the knowledge base. The visual knowledge of seal stamps K_S has two parts: the seal characters K_C and the layout K_L. To get the layout knowledge K_L^Y, we construct the seal stamp dataset, and to get the seal character knowledge K_C^X and K_C^Y (incomplete), we build the seal character dataset. We combine computational and manual methods to collect the data and compile them by style. There are about 6,500 seal stamps and 70,000 seal characters in the Qiushi database, which is our knowledge base for generation and layout.

First, we collected seal stamp books such as Zhong Guo Li Dai Yin Feng (Huang and Dao, 1999), Han Yin Fen Yun He Bian (Yuan, 1979), and the Dictionary of Common Characters for Seal Carving (Liu, 2010). After collecting a large amount of data, we used an intelligent seal-carving image-processing procedure to construct the database, as depicted in Fig. 3. After scanning, page segmentation, deskewing, matching, single-character segmentation, style clustering, standardization, text-image matching, and index creation, the seal books were finally processed into standard data classified by style. Because seals are often old and their edges are easily worn, a stamp on paper is usually not a complete square, and for zhuwen (characters in red) the boundary of the seal is often not obvious. We used pixel-level calculations supplemented by professional knowledge to segment each seal image and the corresponding text in the seal book. Moreover, we corrected the rotation of the stamp using the Hough transform, i.e., deskewing. Generally, a seal image contains multiple characters, and to facilitate generation research, we separated the characters from the images using a crowdsourcing platform. This study applies OCR, supplemented with manual checking, to obtain the text annotations for matching between the text and the single-character images. Considering the various seal-carving styles, the data were sorted by style to facilitate further research. We utilized some key style features, such as thickness, stroke angle distribution, and vertical and horizontal symmetry. We achieved good results, such as zhuwen and baiwen (characters in white) binary classification and style clustering.

[Fig. 3 Construction of the seal-carving database: seal stamp books are scanned and processed through page segmentation, deskewing, matching, single-character segmentation, seal character and seal stamp style clustering, and standardization, yielding the seal stamp dataset and seal character dataset, i.e., the layout knowledge and the seal character knowledge]

Within the same style, the characters in two-character seal images are long, but they are square in four-character seal images. Furthermore, each character has a different layout range on the sealing surface because of its number of strokes, so the size of each character differs. Therefore, a standardization method is needed to make the characters within the same style class have consistent style features and the same size. If we stretch the characters directly, they will be deformed, and their appearance and even their style will be affected. This study applies the medial axis transformation to divide the characters into skeletons and distance distributions, which are then separately adjusted to make the size and thickness uniform.

In addition, this study also supplements the seal-carving database itself. The data-driven method of deep learning is applied to intelligently generate seal characters and seals. After high-quality results are selected and added to the seal-carving database, the database grows continuously. The designed standard database is deployed on the seal character retrieval platform. Users can input a single character to be retrieved, and the system will return the seal images and seal character images containing that character (Fig. 4).

[Fig. 4 Seal glyph retrieval platform: retrieval of seal characters, with font weight as a filter]

5 Generation of seal characters

In this chapter, we show how we obtain the target characters K_C^Y. Although we get some target characters on the seals from our database, the character set lacks many characters because some characters do not appear in the existing seal stamp books. Thus, we use these target characters to learn the rendering K_R^Y with a deep learning model. Combining the skeletons K_Sk^X from the complete character set K_C^X and the target rendering K_R^Y, we obtain all the target characters K_C^Y.

In practice, we take the seal characters on the seals from Zhong Guo Li Dai Yin Feng (Huang and Dao, 1999) as style Y, which we call the Han Yin style. Then we take the characters in the Dictionary of Common Characters for Seal Carving (Liu, 2010) as style X. The skeletons of the Miu seal characters in the Dictionary of Common Characters for Seal Carving are correct and artistic, but the characters are written with a brush, which is not suitable for a seal. As shown in Fig. 5, during training, we extracted the skeletons of the seal characters on the seals and combined the skeletons K_Sk^Y and characters K_C^Y as pairs. Then we input the pairs into a GAN model named zi2zi (Tian, 2020b). After training, the model has learned how to render a skeleton into a character in the Han Yin style. During generation, we extract the skeletons K_Sk^X of the Miu seal characters in the Dictionary of Common Characters for Seal Carving and input them into the model to generate the characters. After artifact removal, the characters can be used in the seal. Our method thus divides the generation into skeleton knowledge and rendering, and develops characters with the skeletons of the Miu seal characters and the Han Yin style.

This method maintains the skeleton of the original Miu seal script style. It thus avoids structural errors and the fusion of neighboring strokes, which are universal in end-to-end models. The brush-written texture is removed from the generated glyphs, which therefore fit well into seal-carving aesthetics. In the skeleton-based generation method, either the seal script or modern Chinese characters can be chosen as the training data, and the model's inference yields generated seal glyphs in the corresponding style. For example, if we want a "harder" style for the seal characters, we can employ modern Chinese characters in the Gothic font style as style Y for training. The shortcomings of this method are the lack of structural changes and the fact that the character set of the generated glyphs depends on the source.

6 Layout

To get the seal stamps K_S^Y from the layout of the seal K_L^Y, we first propose a deformation algorithm to adjust the seal character size and location while maintaining the character shape. Using this deformation algorithm, we can compose the characters interactively. Then, to make the layout more intelligent, we calculate layout parameters from the database to achieve a smart layout.
[Fig. 5 Generation model based on skeleton guided by hierarchical visual knowledge. Training: seal characters on seals are skeletonized; an encoder-decoder generator produces glyphs under an L1 loss, and a discriminator provides a true/fake loss. Generation: Miu seal characters written with a brush are skeletonized, passed through the encoder-decoder, and cleaned by artifact removal]

6.1 Deformation algorithm

Based on mathematical morphology and vector parameterization, this study proposes an intelligent deformation algorithm as the basis for the character layout. Unlike ordinary pictures, glyphs cannot simply be scaled or otherwise deformed: the strokes would be distorted and the structural characteristics would change. For example, if a Chinese glyph is compressed horizontally, it becomes narrower, so its vertical strokes should also become thinner to maintain symmetry. Simple scale processing at the pixel level is therefore insufficient.

The method based on mathematical morphology mainly uses skeletonization and medial axis transformation. The binarized Chinese character image is converted into a skeleton image (denoted by S, a matrix with the same size as the original image) and a distribution map of the nearest distance from each point in the skeleton image to the glyph outline (denoted by D, also a matrix with the same size as the original image). The converted S and D can be restored to the original image by a simple algorithm: traverse all the skeleton points in S; for each point, take the corresponding value d in D and draw a circle of radius d centered at that point; the superposition of all the circles is the original image. Manipulating S and D changes the structure and the stroke shape of the corresponding glyph image, respectively. For example, to change the font size without changing the stroke thickness, one scales both S and D to the target size (with the same multiplier) and then restores them to a font image. To thicken or thin the font strokes, one multiplies D by a factor while keeping S constant, and then restores S and D to a font image.

This image-based morphological manipulation must recalculate the skeleton and the nearest-distance map many times, so its computational cost is very high. Therefore, this study proposes a vector-parameterized deformation method with efficient storage and faster image transmission in browser/server (B/S) applications. This method can be regarded as a sampling of the image contour used by the method above. Specifically, polygons are fitted to the image contour, the nodes of the polygon contour are recorded, and the skeleton points closest to these nodes are recorded. As shown in Fig. 6, we label the polygon contour node set as C = {c1, c2, . . . , cn} and its corresponding skeleton points as S = {s1, s2, . . . , sn}; we construct the offset vector of each node with respect to its skeleton point, O = {oi | oi = ci − si, i = 1, 2, . . . , n}. Manipulating S and O thus changes the restored polygon outline node set. The sequence of nodes and their contours is saved in advance, and the set of contour nodes can always be restored to the glyph image. For example, to change the font size without changing the thickness under this representation, one scales S to the target size without changing O; to thicken or thin the font strokes, one multiplies O by a factor.
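The morphology-based representation can be stated concretely. The sketch below is an illustrative reconstruction, not the paper's implementation: it assumes a NumPy boolean skeleton matrix S and a distance map D, and restores the glyph by superposing a disk of radius D[y, x] at every skeleton point, as described above. (In practice S and D could be produced by a medial-axis transform, e.g. `skimage.morphology.medial_axis`.)

```python
import numpy as np

def restore_glyph(S, D):
    """Restore a binary glyph image from its skeleton S (bool matrix) and
    nearest-distance map D: stamp a disk of radius D[y, x] at each skeleton
    point; the union of all disks is the glyph."""
    h, w = S.shape
    out = np.zeros((h, w), dtype=bool)
    yy, xx = np.mgrid[0:h, 0:w]
    for y, x in zip(*np.nonzero(S)):
        r = D[y, x]
        out |= (yy - y) ** 2 + (xx - x) ** 2 <= r * r
    return out

# Thickening strokes without changing the structure: multiply D, keep S.
S = np.zeros((9, 9), dtype=bool)
S[4, 2:7] = True            # a short horizontal skeleton segment
D = np.ones((9, 9))
thin = restore_glyph(S, D)
thick = restore_glyph(S, D * 2)
```

Scaling both S and D by the same multiplier would instead resize the glyph while keeping its relative stroke weight, matching the first example in the text.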


Fig. 6 Parameterized model based on hierarchical visual knowledge (labels: glyph outline; skeleton; skeleton point; outline point; offset vector)

Fig. 7 User interface for seal composition (panels: before composition; generated composition; fine-tuned composition)
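The (S, O) manipulation is easy to express in code. The sketch below is our illustrative reconstruction with NumPy (the function names are ours, not the paper's): deform() restores contour nodes as c_i = s_i + o_i after scaling the skeleton points and/or the offset vectors, and normalize_thickness() rescales each offset vector to a target norm without changing its direction, which is how a uniform stroke thickness can be imposed.

```python
import numpy as np

def deform(skeleton_pts, offsets, scale=1.0, thickness=1.0):
    """Restore contour nodes c_i = s_i + o_i after editing:
    'scale' resizes the structure (skeleton points S), while
    'thickness' multiplies the offset vectors O to fatten or thin strokes."""
    S = np.asarray(skeleton_pts, dtype=float) * scale
    O = np.asarray(offsets, dtype=float) * thickness
    return S + O

def normalize_thickness(offsets, target_norm):
    """Rescale every offset vector to the same target norm, keeping its
    direction, so all strokes get a uniform (e.g. style-average) thickness."""
    O = np.asarray(offsets, dtype=float)
    norms = np.linalg.norm(O, axis=1, keepdims=True)
    return O / norms * target_norm
```

Scaling S with O fixed changes the glyph size but not the stroke width; multiplying O alone changes the width but not the size, matching the two examples in the text.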

Based on the intelligent deformation algorithm, this study develops an intelligent layout system that enables users to manipulate character deformation and layout in real time and preview the results. As illustrated in Fig. 7, users first query the corresponding seal characters in the seal-carving database we developed, add them to the layout surface, and then drag the vertices of the characters' bounding boxes to adjust the sizes and positions of the characters on a seal in real time. To ease user operations, the interactive interface also provides an automatic character layout based on the average layout method; the default parameters are obtained by statistical techniques. The characters can thus be quickly and evenly arranged on the layout surface and are easier to fine-tune from this arrangement. On the one hand, the intelligent interaction can generate complex and diverse layouts of seal-engraving prints while avoiding unreasonable character layouts. On the other hand, for complex glyphs that do not exist in the database, users can split them into two or more simple glyphs and join these together to form the complex glyphs.

6.2 Intelligent layout guided by database and knowledge

In this part, we calculate layout parameters from the database and knowledge, and then apply these parameters to the layout to achieve an appropriate layout.

The first parameter is the thickness of the strokes. With the parameterized model in Fig. 6, we can calculate the norm of the offset vector, which represents the thickness of a stroke. After calculating the thickness of all characters of the same style in the seal stamp dataset, we obtain the average thickness of this style. We then adjust each offset vector's norm without changing its direction, so that the characters on the seal have the same stroke thickness as the chosen style.

Two further parameters determine the location and size of the characters: margin and character spacing. First, we calculate the convex hull of all the characters in the seal (the yellow lines in Fig. 8). Second, we calculate the hull of each character in the seal separately (the blue lines in Fig. 8). Then we calculate the medial axis of the four hulls (the red lines in Fig. 8). The average of twice the distance between the convex hull and the medial axis of the seal represents the margin. The average distance between the hull of one character and the medial axis or the convex hull of all characters represents the character spacing of that character; averaging over all characters gives the final character spacing.

Fig. 8 Calculation of margin and character spacing (labels: medial axis; convex hull of all the characters; hull of one character)

Han Yin usually adopts an average layout, so the medial axes of the four characters are determined. After calculating the layout parameters above, we can obtain the coordinates of the four corner points of each character's bounding box. Then we use the deformation algorithm to change the characters' sizes and complete the final layout.

7 Experiment

7.1 Dataset and experimental settings

To evaluate the ability to generate seal carvings, we used seal characters on seals from Zhong Guo Li Dai Yin Feng (Huang and Dao, 1999) as the target style Y, which we call the Han Yin style; 3485 seal stamps were used in total. We then used layouts of characters in the Dictionary of Common Characters for Seal Carving (Liu, 2010) as style X. Finally, we used 6030 characters and combined them into layouts paired with the actual seal stamps.

We chose the pix2pix model (Isola et al., 2017) as the baseline, and used data augmentation to produce more than 9000 paired samples. The pix2pix model was trained for 200 epochs.

We randomly selected nine seal stamps as the testing set and compared the results of both methods. Because the targets we used are not ground truth for our approach, we recruited participants to evaluate the results.

7.2 Experiment results

As shown in Fig. 9, our method performs better than the GAN: our results are more stable and more aesthetically pleasing. The GAN results contain many structural errors and fusions of neighboring strokes. One reason is that the same character can have many different glyphs, which are difficult to pair at the pixel level for the pix2pix model. Another reason is that character layouts on the seal are not simple, making it hard to match characters on the seal with characters of the standard script. Some seals generated by our method are still unsatisfactory, such as the first one: the character in the bottom-right corner is too big, and although a parameter is provided to adjust this, the default result is not good enough. The seventh one is also unsatisfactory compared with the truth.

To evaluate these results, we conducted a user study comparing the three kinds of results. To ensure that participants did not know each seal's origin, we shuffled the seals. There were 31 non-specialists (Experiment I) and 20 professionals (Experiment II), and the statistical analyses were conducted separately. We selected several metrics for non-specialists (I): aesthetic harmony, visual balance, texture roughness, stroke spacing, stroke uniformity, and overall aesthetics. For professionals (II), we added two further metrics: variation of strokes and space distribution. Each metric is rated on a 1-7 scale, and a higher score is better.

The descriptive statistics are provided in Table 2.

To analyze the differences among the three kinds of seals on these metrics, we first tested the homogeneity of variances and then conducted the ANOVA test and robust tests of equality of means: when the homogeneity test gave p < 0.05 we used Welch's test, and when p > 0.05 we used the ANOVA test. As shown in Table 3, the results indicate a significant difference among the three kinds of seals. Then we conducted multiple comparisons, as

Fig. 9 Experiment results (rows from top: truth, GAN, ours). GAN, generative adversarial network

Table 2 Descriptive statistics

          Aesthetic  Visual   Texture    Stroke   Stroke      Variation   Space         Overall
          harmony    balance  roughness  spacing  uniformity  of strokes  distribution  aesthetics
I   GAN     2.574     2.746     2.850     2.358     2.366        -           -            2.459
I   Ours    5.520     5.448     5.057     5.480     5.326        -           -            5.548
I   Truth   4.986     5.086     4.642     4.957     4.953        -           -            5.022
II  GAN     4.150     3.978     4.383     4.006     3.983      4.011       3.922          4.000
II  Ours    5.106     5.161     4.933     5.372     5.417      5.072       5.067          5.461
II  Truth   5.117     5.250     4.983     5.267     5.267      5.044       5.028          5.267

Table 3 ANOVA analysis results

     Aesthetic   Visual      Texture     Stroke      Stroke      Variation   Space         Overall
     harmony     balance     roughness   spacing     uniformity  of strokes  distribution  aesthetics
I    687.464***  600.434***  214.835***  686.366***  533.84***      -           -          749.338***
II    31.533***   90.802***    9.336***   75.105***   71.685***  77.405***   38.356***      43.332***
* p < 0.050, ** p < 0.010, *** p < 0.001
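The F values in Table 3 come from standard one-way analysis of variance. As a minimal sketch of the statistic (classic form only; the Welch-type robust variant used when variances are heterogeneous, and the Levene homogeneity test, are available in standard statistics packages and are not reproduced here):

```python
import numpy as np

def anova_F(*groups):
    """Classic one-way ANOVA F statistic:
    between-group mean square / within-group mean square."""
    all_x = np.concatenate(groups)
    k, N = len(groups), all_x.size
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```

Identical group means give F = 0, while well-separated groups with small internal spread (as in Table 2) give the large F values reported above.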

Table 4 Multiple comparisons

              Aesthetic  Visual    Texture    Stroke    Stroke      Variation   Space         Overall
              harmony    balance   roughness  spacing   uniformity  of strokes  distribution  aesthetics
I   Ours-GAN    2.946***  2.706***  2.208***   3.122***  2.961***      -           -          3.090***
I   Ours-truth  0.534***  0.362**   0.416**    0.523***  0.372**       -           -          0.527***
II  Ours-GAN    0.956***  1.183***  0.550***   1.367***  1.433***   1.061***    1.144***      1.461***
II  Ours-truth -0.011    -0.089    -0.050      0.106     0.150      0.028       0.039         0.194
* p < 0.050, ** p < 0.010, *** p < 0.001

shown in Table 4. For non-specialists, our results are rated higher than both the GAN and the truth. For professionals, our results are also rated higher than the GAN, but there is no significant difference between our results and the truth.

Comparing the 27 seals in Fig. 9 individually, non-specialists believe that our seals Nos. 1 and 5 are worse than the truth and that Nos. 2 and 4 are better, whereas professionals think that Nos. 1, 7, and 9 are worse and that Nos. 2, 3, 4, and 6 are better.

In summary, our method is clearly better than the GAN. The inspiring fact is that non-specialists rate our seals higher than true seals, and professionals find no significant difference between our seals and true seals.

8 System

After this preliminary exploration of the intelligent generation of seal-carving art, we integrated the construction and retrieval of the seal-carving database, the smart generation of seal characters, the smart layout of characters on the seal, and the carving of the seal into an integrated intelligent seal-carving system, as depicted in Fig. 10. The system was launched in June 2020, and an "AI seal-carving experience activity" was organized in which school students generated their own seals, achieving initial results.

Given that the users are mostly non-specialists and that operations such as printing are inconvenient on mobile terminals, our team adopted a more intelligent and easy-to-use design for the system. The process involves three stages: customizing the seal characters and style, adjusting interactively, and waiting for carving. In the first stage, the user enters the content text and selects the corresponding style. The system extracts the related seal characters from our database and uses the average layout method to obtain a preliminary result. In the second stage, borders, stroke thickness, rounded corners, character structures, margins, etc., can be adjusted. In the last stage, the user submits the seal design and waits for the seal-carving machine to complete it.
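The carving stage turns the seal image into machine toolpaths. In the actual system this conversion is done by ArtCAM; purely as a toy illustration of the image-to-G-code idea (not the system's implementation; the function name, feed rates, and depths are made up for the example), a naive raster toolpath could look like this:

```python
def raster_gcode(img, pixel_mm=0.1, z_cut=-0.5, z_safe=2.0):
    """Toy raster toolpath: for each row of a binary image, carve every
    continuous run of dark pixels as one straight G1 cutting move.
    Real CAM tools also handle tool geometry, depth maps, and optimization."""
    lines = ["G21", "G90"]  # millimeter units, absolute coordinates
    for y, row in enumerate(img):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1  # extend the run of dark pixels
                lines += [
                    f"G0 X{start * pixel_mm:.2f} Y{y * pixel_mm:.2f} Z{z_safe}",
                    f"G1 Z{z_cut} F100",              # plunge into the stone
                    f"G1 X{(x - 1) * pixel_mm:.2f} F300",  # cut along the run
                    f"G0 Z{z_safe}",                  # retract
                ]
            else:
                x += 1
    lines.append("M2")  # end of program
    return lines

program = raster_gcode([[0, 1, 1, 0], [0, 0, 0, 0]])
```

The resulting G-code lines are what an upper computer would stream to the CNC engraving machine.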



Fig. 10 Integrated intelligent system for seal-carving art generation

To connect the seal stamps with the engraving machine, we used ArtCAM, a popular computer-aided manufacturing tool, to convert each image into toolpaths for numerical control engraving (Yin et al., 2020). Each generated seal stamp was saved as an image and uploaded to the server side. The upper computer of the computer numerical control (CNC) engraving system then downloaded the image of the seal and converted it to toolpaths in G-code. With the G-code instructions sent from the upper computer via the USB serial port, the CNC engraving machine engraved the seal design on a stone, under the control of the lower programmable logic controller in the system.

As an attempt to promote the culture of seal carving to the public and let people experience the art of seal carving, this event achieved great success and provided a reference for cultural development in the field of Chinese character art. Moreover, as an essential research object among Chinese characters, seal carving has significant cultural and academic value. As a product of ancient symbols and modern computer science and technology, the integrated intelligent seal-carving system demonstrates the auxiliary ability of computer science in the study of art.

9 Conclusions

Chinese seal carving, as an ancient traditional culture, deserves to be carried forward. The intelligent generation of Chinese seal carving is therefore a critical study that will improve the efficiency and quality of seal-carving art creation and make seal carving more accessible to people. This paper proposes a pipeline for the intelligent generation of Chinese seal carving guided by visual knowledge. First, we constructed the Qiushi seal-carving database, including the seal character dataset and the seal stamp dataset, and extracted seal character knowledge and layout knowledge from it. Then, guided by the seal character knowledge, we proposed a generation method for seal characters. Guided by hierarchical visual understanding, we proposed a deformation algorithm. With the help of the layout knowledge and the deformation algorithm, we achieved an intelligent layout. Finally, a layout of standard seal script can become a usable seal after the generation of seal characters and the layout. This study provides a reference for intelligent computer-art generation and supports the study of Chinese characters, especially ancient characters. Clear goals and requirements may impose new conditions on technology, and current mainstream technologies may face challenges; on the other hand, technological innovation may bring new experiences to aesthetics and promote the inheritance and spread of culture. The perfect combination of technology and art is our ultimate goal.

Our work also has weaknesses. For example, although the use of skeletons improves stability, it also restricts the variety of the generated seals; a possible remedy is to train a new model that transfers skeletons between different styles. We now have excellent results on the Han Yin style, but many other styles are worth generating. To create more seal styles, we need to expand our datasets and use visual knowledge more effectively.

Compliance with ethics guidelines

All authors declare that they have no conflict of interest.

References

Baxter B, Scheib V, Lin MC, et al., 2001. DAB: interactive haptic painting with 3D virtual brushes. Proc 28th Annual Conf on Computer Graphics and Interactive

Techniques, p.461-468. https://doi.org/10.1145/383259.383313
Bi XF, Tang M, Lin JZ, et al., 2003. An experience based virtual brush model. J Comput Res Dev, 40(8):1244-1251 (in Chinese).
Chang B, Zhang Q, Pan SY, et al., 2018a. Generating handwritten Chinese characters using CycleGAN. IEEE Winter Conf on Applications of Computer Vision, p.199-207. https://doi.org/10.1109/WACV.2018.00028
Chang J, Gu YJ, Zhang Y, et al., 2018b. Chinese handwriting imitation with hierarchical generative adversarial network. British Machine Vision Conf 2018, article 290.
Chen YS, 1995. Computer processing on the identification of a Chinese seal image. Proc 3rd Int Conf on Document Analysis and Recognition, p.422-425. https://doi.org/10.1109/ICDAR.1995.599027
Chen YS, 1996. Automatic identification for a Chinese seal image. Pattern Recognit, 29(11):1807-1820. https://doi.org/10.1016/0031-3203(96)00032-5
Chongqing Press, 1999. China's seal style series. Chongqing Press, Chongqing, China. http://dh.ersjk.com/
Chu NSH, Tai CL, 2004. Real-time painting with an expressive virtual Chinese brush. IEEE Comput Graph Appl, 24(5):76-85. https://doi.org/10.1109/MCG.2004.37
Chu NSH, Tai CL, 2005. MoXi: real-time ink dispersion in absorbent paper. ACM SIGGRAPH 2005 Papers, p.504-511. https://doi.org/10.1145/1186822.1073221
Dong J, Xu M, Pan YH, 2008. Statistic model-based simulation on calligraphy creation. Chin J Comput, 31(7):1276-1282 (in Chinese). https://doi.org/10.3321/j.issn:0254-4164.2008.07.023
Fan TJ, Tsai WH, 1984. Automatic Chinese seal identification. Comput Vis, Graph, Image Process, 25(3):311-330. https://doi.org/10.1016/0734-189X(84)90198-1
FontLab, 2020. FontLab 7. https://www.fontlab.com/
Founder Group, 2020. FounderType. http://www.foundertype.com/
Girshick RB, 2004. Simulating Chinese brush painting: the parametric hairy brush. ACM SIGGRAPH 2004 Posters, article 22. https://doi.org/10.1145/1186415.1186442
Glyphs, 2020. Glyphs. https://glyphsapp.com/
Gu ZL, 2013. Eight Lectures on the Basis of Seal Cutting. Painting and Calligraphy Publishing House, Shanghai, China (in Chinese).
Guo QL, Kunii TL, 1991. Modeling the diffuse paintings of 'Sumie'. Proc IFIP WG 5.10 Working Conf, p.329-338. https://doi.org/10.1007/978-4-431-68147-2_21
High-Logic, 2020. High-Logic. https://www.high-logic.com/
Isola P, Zhu JY, Zhou TH, et al., 2017. Image-to-image translation with conditional adversarial networks. IEEE Conf on Computer Vision and Pattern Recognition, p.5967-5976. https://doi.org/10.1109/CVPR.2017.632
Jiang Y, Lian ZH, Tang YM, et al., 2017. DCFont: an end-to-end deep Chinese font generation system. SIGGRAPH Asia 2017 Technical Briefs, article 22. https://doi.org/10.1145/3145749.3149440
Jiang Y, Lian ZH, Tang YM, et al., 2019. SCFont: structure-guided Chinese font generation via deep stacked networks. Proc AAAI Conf Artif Intell, 33(1):4015-4022. https://doi.org/10.1609/aaai.v33i01.33014015
Lee J, 1999. Simulating oriental black-ink painting. IEEE Comput Graph Appl, 19(3):74-81. https://doi.org/10.1109/38.761553
Leung H, 2004. Analysis of traditional Chinese seals and synthesis of personalized seals. IEEE Int Conf on Multimedia and Expo, p.1283-1286. https://doi.org/10.1109/ICME.2004.1394458
Li GT, Ma S, 2009. Seal Cutting. Jiangsu Education Publishing House.
Li W, Song YP, Zhou CL, 2014. Computationally evaluating and synthesizing Chinese calligraphy. Neurocomputing, 135:299-305. https://doi.org/10.1016/j.neucom.2013.12.013
Lian ZH, Xiao JG, 2012. Automatic shape morphing for Chinese characters. SIGGRAPH Asia 2012 Technical Briefs, p.1-4. https://doi.org/10.1145/2407746.2407748
Lian ZH, Zhao B, Chen XD, et al., 2018. EasyFont: a style learning-based system to easily build your large-scale handwriting fonts. ACM Trans Graph, 38(1):6. https://doi.org/10.1145/3213767
Lin JW, Wang CY, Ting CL, et al., 2014. Font generation of personal handwritten Chinese characters. Proc SPIE 9069, 5th Int Conf on Graphic and Image Processing, article 90691T. https://doi.org/10.1117/12.2050128
Lin JW, Hong CY, Chang RI, et al., 2015. Complete font generation of Chinese characters in personal handwriting style. IEEE 34th Int Performance Computing and Communications Conf, p.1-5. https://doi.org/10.1109/PCCC.2015.7410321
Liu J, 2010. Zhuan Ke Chang Yong Zi Zi Dian. Xiling Seal Engraver's Society Publishing House.
Lu JW, Barnes C, DiVerdi S, et al., 2013. RealBrush: painting with examples of physical media. ACM Trans Graph, 32(4):117. https://doi.org/10.1145/2461912.2461998
Lyu PY, Bai X, Yao C, et al., 2017. Auto-encoder guided GAN for Chinese calligraphy synthesis. 14th IAPR Int Conf on Document Analysis and Recognition, p.1095-1100. https://doi.org/10.1109/ICDAR.2017.181
Mi XF, Xu J, Tang M, et al., 2002. The droplet virtual brush for Chinese calligraphic character modeling. Proc 6th IEEE Workshop on Applications of Computer Vision, p.330-334. https://doi.org/10.1109/ACV.2002.1182203
Pan YH, 2019. On visual knowledge. Front Inf Technol Electron Eng, 20(8):1021-1025. https://doi.org/10.1631/FITEE.1910001
Pan YH, 2021. Miniaturized five fundamental issues about visual knowledge. Front Inf Technol Electron Eng, 22(5):615-618. https://doi.org/10.1631/FITEE.2040000
Qiu X, Matto GL, Norman J, 2000. Wen tzu hsüeh kai yao. Society for the Study of Early China, Institute of East Asian Studies, University of California.
Rewrite, 2020a. Rewrite. https://github.com/kaonashi-tyc/Rewrite
Saito S, Nakajima M, 1999. 3D physics-based brush model for painting. ACM SIGGRAPH 99 Conf Abstracts and Applications, article 226. https://doi.org/10.1145/311625.312110
Shi C, Xiao JG, Xu CH, et al., 2014. Automatic generation of Chinese character using features fusion from calligraphy and font. Proc SPIE 9012, The Engineering Reality of Virtual Reality, article 90120N. https://doi.org/10.1117/12.2038945
Strassmann S, 1986. Hairy brushes. ACM SIGGRAPH Comput Graph, 20(4):225-232. https://doi.org/10.1145/15886.15911
Su CL, 2007a. Edge distance and gray level extraction and orientation invariant transform for Chinese seal recognition. Appl Math Comput, 193(2):325-334. https://doi.org/10.1016/j.amc.2007.03.061
Su CL, 2007b. Ring-to-line mapping and orientation invariant transform for Chinese seal character recognition. Int J Comput Math, 84(1):11-22. https://doi.org/10.1080/00207160701197967
Sun DY, Ren TZ, Li CX, et al., 2017. Learning to write stylized Chinese characters by reading a handful of examples. https://arxiv.org/abs/1712.06424v3
Tang SS, Xia ZQ, Lian ZH, et al., 2019. FontRNN: generating large-scale Chinese fonts via recurrent neural network. Comput Graph Forum, 38(7):567-577. https://doi.org/10.1111/cgf.13861
Unicode Consortium, 2020. Roadmap to the TIP. https://unicode.org/roadmaps/tip/
Wang L, 1980. Manuscript of Chinese History. Zhong Hua Book Company, Beijing, China (in Chinese).
Wang YG, Pang YJ, 1986. CCC—computer Chinese calligraphy system. Inf Control, (2):38-43 (in Chinese).
Wen C, Chang J, Zhang Y, et al., 2019. Handwritten Chinese font generation with collaborative stroke refinement. https://arxiv.org/abs/1904.13268
Wong HTF, Ip HHS, 2000. Virtual brush: a model-based synthesis of Chinese calligraphy. Comput Graph, 24(1):99-113. https://doi.org/10.1016/S0097-8493(99)00141-7
Xu SH, Tang M, Lau F, et al., 2002. A solid model based virtual hairy brush. Comput Graph Forum, 21(3):299-308. https://doi.org/10.1111/1467-8659.00589
Xu SH, Jiang H, Lau FCM, 2007. An intelligent system for Chinese calligraphy. Proc 22nd National Conf on Artificial Intelligence, p.1578-1583.
Xu SH, Jiang H, Jin T, et al., 2009. Automatic generation of Chinese calligraphic writings with style imitation. IEEE Intell Syst, 24(2):44-53. https://doi.org/10.1109/MIS.2009.23
Xu YX, 2007. The Research and Implementation of Chinese Calligraphy Tablet Generation Techniques. MS Thesis, Zhejiang University, Hangzhou, China.
Yin YH, Chen ZW, Zhao YJ, et al., 2020. Automated Chinese seal carving art creation with AI assistance. IEEE Conf on Multimedia Information Processing and Retrieval, p.394-395. https://doi.org/10.1109/MIPR49039.2020.00086
Yu JH, Peng QS, 2005. Realistic synthesis of cao shu of Chinese calligraphy. Comput Graph, 29(1):145-153. https://doi.org/10.1016/j.cag.2004.11.013
Yu JH, Zhang JD, Cong YQ, 1996. A physically-based brush-pen model. J Comput-Aid Desig Comput Graph, 8(4):241-245 (in Chinese). https://doi.org/10.3321/j.issn:1003-9775.1996.04.001
Yu K, 2010. Researches on Some Key Technologies of Computer Calligraphy. PhD Dissertation, Zhejiang University, Hangzhou, China (in Chinese).
Yuan R, 1979. The combination of Han Yin Fen Yun. Shanghai Ancient Book Bookstore.
Zhang JS, 2019. A survey of digital calligraphy. Scientia Sinica Informationis, 49(2):143-158 (in Chinese).
Zhang YX, Zhang Y, Cai WB, 2018. Separating style and content for generalized style transfer. IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.8447-8455. https://doi.org/10.1109/CVPR.2018.00881
Zheng ZZ, Zhang FY, 2018. Coconditional autoencoding adversarial networks for Chinese font feature learning. https://arxiv.org/abs/1812.04451
Zhou BY, Wang WH, Chen ZH, 2011. Easy generation of personal Chinese handwritten fonts. IEEE Int Conf on Multimedia and Expo, p.1-6. https://doi.org/10.1109/ICME.2011.6011892
zi2zi, 2020. zi2zi. https://github.com/kaonashi-tyc/zi2zi
Zong A, Zhu YK, 2014. StrokeBank: automating personalized Chinese handwriting generation. Twenty-Sixth IAAI Conf.