Shiyu Duan, Shujian Yu, Yunmei Chen, Jose C. Principe, Neural Computation, 2020 [pdf][code]
TL;DR: (1) We propose a new family of connectionist models powered by kernel machines (think of them as cousins of deep neural networks); (2) we propose a greedy, layer-wise training algorithm and prove its optimality in certain settings (it achieves what backpropagation achieves, but layer by layer). The advantages? Mainly, the architecture can be modularized, and it is easier to tune because the user gets direct information about the training quality of each hidden layer.
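The layer-by-layer idea can be sketched in a few lines. This is a toy illustration only: `KernelLayer`, the RBF kernel choice, and the layer-local ridge-regression fit are assumptions standing in for the paper's actual architecture and layer-wise objective, but they show the key point that each layer is trained greedily with the previous layers frozen.

```python
import numpy as np

def rbf_kernel(X, C, gamma):
    # Pairwise RBF (Gaussian) kernel between rows of X and centers C
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelLayer:
    """A layer built from kernel machines: each output unit is a
    kernel expansion over a shared set of centers."""
    def __init__(self, centers, gamma=0.5, ridge=1e-3):
        self.C, self.gamma, self.ridge = centers, gamma, ridge
        self.A = None  # expansion coefficients, shape (n_centers, n_outputs)

    def fit(self, X, T):
        # Greedy, layer-local fit: regularized least squares to targets T
        # (a stand-in for the paper's layer-wise objective, not the exact one)
        K = rbf_kernel(X, self.C, self.gamma)
        m = K.shape[1]
        self.A = np.linalg.solve(K.T @ K + self.ridge * np.eye(m), K.T @ T)
        return self(X)

    def __call__(self, X):
        return rbf_kernel(X, self.C, self.gamma) @ self.A

# Toy usage: stack two layers; each is trained with the previous one frozen
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # XOR-like labels

layer1 = KernelLayer(centers=X[:16])
H = layer1.fit(X, y)        # train layer 1, output hidden representation
layer2 = KernelLayer(centers=H[:16])
pred = layer2.fit(H, y)     # train layer 2 on frozen layer-1 outputs
train_acc = ((pred > 0.5) == (y > 0.5)).mean()
```

Because each layer is fit against its own closed-form objective, there is no backward pass through the stack, and the intermediate fit quality (here, how well `H` already predicts `y`) is directly observable.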
Shiyu Duan, Shujian Yu, Jose C. Principe, IEEE Transactions on Neural Networks and Learning Systems, 2021 [pdf][code]
TL;DR: Using a simple trick, we reveal the kernel machines hidden inside your favorite neural networks. Based on this observation, we propose a provably optimal modular training framework for neural networks in classification, making fully modular deep learning workflows possible. Our training method needs no between-module propagation and relies almost entirely on weak pairwise labels, yet still matches end-to-end backpropagation in accuracy. Finally, we demonstrate that a modular workflow naturally provides simple but reliable solutions to long-standing problems in important domains such as transfer learning.
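A minimal sketch of the two ingredients, under my own simplifying assumptions: any module's output features induce a kernel via inner products, and "weak pairwise labels" say only whether two samples share a class. The ±1 encoding and the kernel-alignment score below are illustrative stand-ins for the paper's exact objective.

```python
import numpy as np

def module_kernel(features):
    # A module's output features F induce a kernel K(x, x') = <F(x), F(x')>
    return features @ features.T

def pairwise_targets(y):
    # Weak pairwise labels: +1 if two samples share a class, -1 otherwise.
    # Only same/different judgments are needed, never class identities.
    return np.where(y[:, None] == y[None, :], 1.0, -1.0)

def alignment(K, G):
    # Cosine similarity between the module's kernel and the target kernel;
    # a module can be scored (and trained) locally, with no gradients
    # flowing in from other modules
    return (K * G).sum() / (np.linalg.norm(K) * np.linalg.norm(G))

# Toy check: class-separating features score higher than random ones
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 10)              # 20 samples, 2 classes
G = pairwise_targets(y)
F_good = np.eye(2)[y]                  # one-hot: perfectly separated classes
F_rand = rng.normal(size=(20, 5))      # an untrained module
a_good = alignment(module_kernel(F_good), G)
a_rand = alignment(module_kernel(F_rand), G)
```

The point of the toy check: a module whose features separate the classes scores higher on this purely local criterion, which is what lets each module be trained without seeing the rest of the network.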
Shiyu Duan, Spencer Chang, Jose C. Principe, Journal of Machine Learning Research, 2023 [pdf]
TL;DR: We propose training classifiers with a novel form of labeled data that is easier to obtain but is just as informative. This new form of labeled data, which we call sufficiently-labeled data, also naturally provides protection for user privacy.
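One hypothetical way to picture such data, as a sketch only (the function name `to_sufficient_labels` and this pairwise encoding are my assumptions, not the paper's formal definition): re-encode a fully labeled set as pairs tagged with a same-class bit. The class identities never appear in the output, which is where the privacy angle comes from.

```python
import random

def to_sufficient_labels(samples, labels, seed=0):
    """Re-encode a labeled dataset as sample pairs carrying only a
    same-class indicator (a hypothetical sketch of 'sufficient labels').
    Original class identities are absent from the output."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    return [
        (samples[i], samples[j], int(labels[i] == labels[j]))
        for i, j in zip(idx[::2], idx[1::2])
    ]

# Usage: 6 examples, 2 classes -> 3 pairs carrying only equality bits
data = ["a", "b", "c", "d", "e", "f"]
labels = [0, 0, 1, 1, 0, 1]
pairs = to_sufficient_labels(data, labels)
```

Such pairs are cheaper to annotate (a "same or different?" judgment) and, in this sketch, an eavesdropper who sees `pairs` learns only equality relations, not which class any example belongs to.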
Shiyu Duan, Jose C. Principe, IEEE Computational Intelligence Magazine, 2022 [pdf]
TL;DR: We review popular provably optimal methods for training deep architectures without end-to-end backpropagation.
Shiyu Duan, Huaijin Chen, Jinwei Gu, IEEE Transactions on Image Processing, 2022 [pdf][code]
TL;DR: We propose a generic GAN-based framework that lets existing image compression codecs leverage high-level semantics. We then show that, thanks to the semantics, these “semantically enhanced” codecs produce more visually pleasing results, enable downstream machine learning algorithms to perform significantly better, and achieve favorable rate-distortion performance compared to the original codecs.
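To see why a semantic term changes what a codec prefers, here is a deliberately tiny sketch; the objective shape, the weight `lam_sem`, and the candidate numbers are all illustrative assumptions, not the paper's formulation.

```python
def codec_objective(distortion, semantic_loss, lam_sem=0.1):
    # Pixel distortion plus a semantic-consistency penalty (e.g. how much
    # a downstream model's output changes); the weight is illustrative
    return distortion + lam_sem * semantic_loss

# Two candidate reconstructions of the same image:
#   A: slightly lower pixel error, but mangles semantically important regions
#   B: slightly higher pixel error, but preserves them
cand_a = {"distortion": 0.10, "semantic_loss": 0.9}
cand_b = {"distortion": 0.12, "semantic_loss": 0.1}

pick_plain = min((cand_a, cand_b), key=lambda c: c["distortion"])
pick_sem = min((cand_a, cand_b), key=lambda c: codec_objective(**c))
```

A purely distortion-driven codec picks candidate A, while the semantics-aware objective flips the choice to B, which is the mechanism behind the improved downstream performance.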