Publications

JPD-SE: High-Level Semantics for Joint Perception-Distortion Enhancement in Image Compression

IEEE Transactions on Image Processing, 2022

Shiyu Duan, Huaijin Chen, Jinwei Gu, IEEE Transactions on Image Processing, 2022 [pdf][code]

TL;DR: We propose a generic GAN-based framework that enables existing image compression codecs to leverage high-level semantics. We then show that, thanks to the use of semantics, these “semantically-enhanced” codecs produce more visually pleasing results, enable downstream machine learning algorithms to perform significantly better, and achieve favorable rate-distortion performance compared to the originals.
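
To make the idea concrete, below is a minimal PyTorch sketch of semantics-aided reconstruction, not the paper's actual architecture: an off-the-shelf codec produces a (degraded) reconstruction, and a small network conditioned on a per-pixel semantic map refines it. The module name `SemanticRefiner`, the layer sizes, and the use of a one-hot segmentation map as the "semantics" are illustrative assumptions; in the actual framework the enhancement is trained adversarially for perceptual quality.

```python
# Illustrative sketch only -- all module names and sizes are assumptions.
import torch
import torch.nn as nn

class SemanticRefiner(nn.Module):
    """Refines a codec reconstruction using a one-hot semantic map as extra input."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, decoded_img, semantic_map):
        # Concatenate the decoded image with per-pixel semantics along channels.
        x = torch.cat([decoded_img, semantic_map], dim=1)
        # Predict a residual correction on top of the codec output.
        return decoded_img + self.net(x)

# Toy usage: pretend `decoded` came from an existing codec (e.g., JPEG/BPG).
decoded = torch.rand(1, 3, 64, 64)        # codec reconstruction
semantics = torch.zeros(1, 5, 64, 64)     # 5-class one-hot segmentation map
semantics[:, 0] = 1.0
refined = SemanticRefiner(num_classes=5)(decoded, semantics)
print(refined.shape)                      # torch.Size([1, 3, 64, 64])
```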

Labels, Information, and Computation: Efficient Learning Using Sufficient Labels

Journal of Machine Learning Research, 2023

Shiyu Duan, Spencer Chang, Jose C. Principe, Journal of Machine Learning Research, 2023 [pdf]

TL;DR: We propose training classifiers with a novel form of labeled data that is easier to obtain but is just as informative. This new form of labeled data, which we call sufficiently-labeled data, also naturally provides protection for user privacy.
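
For illustration only, here is a toy sketch of what such data might look like, assuming sufficiently-labeled data takes the form of example pairs annotated only with a same-class/different-class bit (as in our related pairwise-label work); no class identity is ever released, which is where the privacy benefit comes from.

```python
# Toy construction of pairwise same-class annotations -- an illustrative assumption
# about the form of "sufficiently-labeled" data, not the paper's formal definition.
import random

def make_pairwise_labels(examples, labels, num_pairs):
    """Return (x_i, x_j, same_class) triples without exposing class identities."""
    pairs = []
    for _ in range(num_pairs):
        i, j = random.randrange(len(examples)), random.randrange(len(examples))
        pairs.append((examples[i], examples[j], int(labels[i] == labels[j])))
    return pairs

# Toy data: class labels stay private, only the same/different bit is released.
xs = [[0.1, 0.2], [0.9, 0.8], [0.15, 0.25], [0.85, 0.9]]
ys = ["cat", "dog", "cat", "dog"]
for x_i, x_j, same in make_pairwise_labels(xs, ys, 3):
    print(x_i, x_j, "same class" if same else "different class")
```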

Modularizing Deep Learning via Pairwise Learning With Kernels

IEEE Transactions on Neural Networks and Learning Systems, 2021

Shiyu Duan, Shujian Yu, Jose C. Principe, IEEE Transactions on Neural Networks and Learning Systems, 2021 [pdf][code]

TL;DR: Using a simple trick, we reveal the kernel machines hidden inside your favorite neural networks. Based on this observation, we propose a provably optimal modular training framework for neural networks in classification, enabling fully modular deep learning workflows. Our training method needs no between-module propagation and relies almost entirely on weak pairwise labels, yet it matches end-to-end backpropagation in accuracy. Finally, we demonstrate that a modular workflow naturally provides simple but reliable solutions to long-standing problems in important domains such as transfer learning.
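
Below is a hedged PyTorch sketch of the module-by-module, pairwise-label training idea. The distance-based contrastive loss is an illustrative stand-in for the kernel-based objective used in the paper, and the module sizes, margin, and training lengths are arbitrary assumptions.

```python
# Sketch of modular training with pairwise labels; loss and sizes are assumptions.
import torch
import torch.nn as nn

def pairwise_loss(z_a, z_b, same, margin=1.0):
    # Pull same-class representations together, push different-class ones apart.
    d = (z_a - z_b).pow(2).sum(dim=1)
    return (same * d + (1 - same) * (margin - d.sqrt()).clamp(min=0).pow(2)).mean()

module1 = nn.Sequential(nn.Linear(10, 32), nn.Tanh())
module2 = nn.Sequential(nn.Linear(32, 16), nn.Tanh())

x_a, x_b = torch.randn(64, 10), torch.randn(64, 10)
same = torch.randint(0, 2, (64,)).float()   # 1 if the pair shares a class

# Train module 1 alone; no gradients ever flow back from later modules.
opt1 = torch.optim.Adam(module1.parameters(), lr=1e-3)
for _ in range(100):
    opt1.zero_grad()
    pairwise_loss(module1(x_a), module1(x_b), same).backward()
    opt1.step()

# Freeze module 1, then train module 2 on its (detached) outputs.
h_a, h_b = module1(x_a).detach(), module1(x_b).detach()
opt2 = torch.optim.Adam(module2.parameters(), lr=1e-3)
for _ in range(100):
    opt2.zero_grad()
    pairwise_loss(module2(h_a), module2(h_b), same).backward()
    opt2.step()
```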

On Kernel Method-Based Connectionist Models and Supervised Deep Learning Without Backpropagation

Neural Computation, 2020

Shiyu Duan, Shujian Yu, Yunmei Chen, Jose C. Principe, Neural Computation, 2020 [pdf][code]

TL;DR: (1) We propose a new family of connectionist models powered by kernel machines (think of these models as cousins of deep neural networks); (2) We propose a greedy, layer-wise training algorithm and prove its optimality in certain settings (it achieves what backpropagation achieves, and does so just as well, but in a layer-by-layer fashion). The advantages? Mainly, the architecture can be modularized and is easier to tune, since the user now has more information about, e.g., the training quality of the hidden layers.
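
As a rough illustration of point (1), the NumPy sketch below stacks "kernel layers" in which each output unit is a weighted sum of Gaussian kernel evaluations against a set of centers. The choice of centers, the kernel width, and all sizes are assumptions made for illustration, and the greedy layer-wise training itself is omitted.

```python
# Illustrative kernel-machine layers; centers, widths, and sizes are assumptions.
import numpy as np

def gaussian_kernel(x, centers, gamma=1.0):
    # Pairwise squared distances between inputs (n, d) and centers (m, d).
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                  # shape (n, m)

class KernelLayer:
    def __init__(self, centers, num_units, rng):
        self.centers = centers                  # kernel expansion points
        self.weights = rng.normal(size=(centers.shape[0], num_units)) * 0.1

    def forward(self, x):
        return gaussian_kernel(x, self.centers) @ self.weights

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 5))
layer1 = KernelLayer(centers=x[:10], num_units=8, rng=rng)
h = layer1.forward(x)                           # hidden representation
layer2 = KernelLayer(centers=h[:10], num_units=3, rng=rng)
print(layer2.forward(h).shape)                  # (20, 3)
```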