ContextSeg: Sketch Semantic Segmentation by Querying the Context with Attention
Jiawei Wang1,   Changjian Li2
1Shandong University    2University of Edinburgh
CVPR 2024
Paper teaser
Fig. 1. Given an input sketch, semantic segmentation assigns labels to strokes based on their semantics so as to form semantic groups. Our method is robust to stroke variations, achieving superior results (e.g., the correctly labeled airplane windows).
Abstract
Sketch semantic segmentation is a well-explored and pivotal problem in computer vision, involving the assignment of pre-defined part labels to individual strokes. This paper presents ContextSeg -- a simple yet highly effective two-stage approach to this problem. In the first stage, to better encode the shape and positional information of strokes, we predict an extra dense distance field in an autoencoder network to reinforce structural information learning. In the second stage, we treat an entire stroke as a single entity and label groups of strokes within the same semantic part using an auto-regressive Transformer with the default attention mechanism. By labeling strokes group by group, our method fully leverages contextual information when making decisions for the remaining strokes. Our method achieves the best segmentation accuracy compared with state-of-the-art approaches on two representative datasets and has been extensively evaluated, demonstrating its superior performance. Additionally, we offer insights into mitigating part imbalance in the training data and a preliminary experiment on cross-category training, which can inspire future research in this field.
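To make the distance-field supervision concrete, here is a small NumPy sketch of how a ground-truth dense distance field for a stroke could be computed (the function name, grid resolution, and sampling density are our illustrative assumptions; in the paper the network predicts such a field, it is not part of the released code):

```python
import numpy as np

def stroke_distance_field(points, grid_size=64, samples_per_seg=64):
    """Unsigned distance field of a polyline stroke with points in [0, 1]^2:
    every grid cell stores the distance to the nearest point on the stroke.
    A toy stand-in for the dense distance-field target that supervises the
    stroke embedding autoencoder."""
    pts = np.asarray(points, dtype=float)
    # densely resample each segment so the field covers the whole curve
    ts = np.linspace(0.0, 1.0, samples_per_seg)[:, None]
    samples = np.concatenate(
        [(1 - ts) * p + ts * q for p, q in zip(pts[:-1], pts[1:])]
    )                                                # (S, 2) curve samples
    xs = np.linspace(0.0, 1.0, grid_size)
    gx, gy = np.meshgrid(xs, xs)                     # grid cell coordinates
    grid = np.stack([gx, gy], axis=-1).reshape(-1, 2)
    # brute-force nearest-sample distance for every grid cell
    dists = np.linalg.norm(grid[:, None] - samples[None], axis=-1).min(axis=1)
    return dists.reshape(grid_size, grid_size)
```

Such a dense target penalizes the decoder everywhere on the canvas, not only on stroke pixels, which is one way to reinforce shape and positional learning.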
Paper [arXiv]
Code and Data [GitHub]
Citation:
Jiawei Wang, Changjian Li. "ContextSeg: Sketch Semantic Segmentation by Querying the Context with Attention." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. (bibtex)

Algorithm
Fig. 2. Overview of ContextSeg. Given an input sketch, it is first divided into a sequence of strokes, which are used to train our stroke embedding network, an autoencoder with an extra distance-field output (Sec. 3.1). The learned embeddings are then sent to the segmentation Transformer, which operates in an auto-regressive manner (Sec. 3.2). At each step, the Transformer takes the contextual information, i.e., the previously labeled strokes and the remaining strokes, as input when labeling the current group of strokes.
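The auto-regressive, group-by-group labeling loop can be illustrated with a toy NumPy sketch. This is not the paper's Transformer: we assume stroke embeddings are given, use the mean embedding of already-labeled strokes as the context query, and pick the next group by dot-product attention scores with a fixed group size. All names and heuristics here are our assumptions for illustration only.

```python
import numpy as np

def autoregressive_group_labeling(embeddings, num_groups):
    """Toy version of group-by-group labeling: at each step, score the
    remaining strokes against the context (already-labeled strokes) with
    dot-product attention and assign the top-scoring strokes to the next
    semantic group."""
    n = len(embeddings)
    labels = np.full(n, -1)                    # -1 marks unlabeled strokes
    group_size = max(1, n // num_groups)
    for g in range(num_groups):
        remaining = np.where(labels == -1)[0]
        if remaining.size == 0:
            break
        labeled = np.where(labels != -1)[0]
        # context query: mean embedding of labeled strokes (global mean at step 0)
        query = embeddings[labeled].mean(0) if labeled.size else embeddings.mean(0)
        scores = embeddings[remaining] @ query          # attention logits
        if g == num_groups - 1:                         # last group takes the rest
            take = remaining
        else:
            take = remaining[np.argsort(-scores)[:group_size]]
        labels[take] = g
    return labels
```

The key property this toy loop shares with the method is that each step conditions on both the already-labeled context and the remaining strokes, rather than classifying every stroke independently.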
Results
Result Gallery
Fig. 3. Visual comparison with three competitors on the SPG and the CreativeSketch datasets.
Statistical Comparison
Stroke Embedding Evaluation
Fig. 4. Sketch reconstruction results of our ablation study on different stroke embedding networks.
©Changjian Li. Last update: March, 2024.