Griffith Research Online
  • Parameter-Efficient Deep Neural Networks With Bilinear Projections

    Author(s)
    Yu, Litao
    Gao, Yongsheng
    Zhou, Jun
    Zhang, Jian
    Griffith University Author(s)
    Zhou, Jun
    Yu, Litao
    Gao, Yongsheng
    Zhang, Jian
    Year published
    2020
    Abstract
    Recent research on deep neural networks (DNNs) has primarily focused on improving model accuracy. Given a suitable deep learning framework, it is generally possible to increase the depth or layer width to achieve higher accuracy. However, the huge number of model parameters imposes additional computational and memory overhead and leads to parameter redundancy. In this article, we address the parameter redundancy problem in DNNs by replacing conventional full projections with bilinear projections (BPs). For a fully connected layer with D input nodes and D output nodes, applying BP can reduce the model space complexity from O(D²) to O(2D), achieving a deep model with a sublinear layer size. However, the structured projection has fewer degrees of freedom than the full projection, which can cause underfitting. We therefore scale up the mapping size by increasing the number of output channels, which preserves and can even boost model accuracy. This makes such deep models parameter-efficient and easy to deploy on mobile systems with limited memory. Experiments on four benchmark data sets show that applying the proposed BP to DNNs can achieve even higher accuracy than conventional full DNNs while significantly reducing the model size.
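    The parameter saving claimed in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and shapes below are illustrative assumptions. A full projection on a D-dimensional input uses a D×D weight matrix (O(D²) parameters), while a bilinear projection reshapes the input into a d×d matrix (D = d²) and multiplies it on both sides by two small d×d factors, giving 2d² = 2D parameters:

    ```python
    import numpy as np

    def full_projection(x, W):
        # Conventional fully connected layer: W has D*D parameters.
        return W @ x

    def bilinear_projection(x, A, B):
        # Bilinear projection (sketch): x has length D = d*d; A and B are
        # (d, d) factors, so the layer has only 2*d*d = 2D parameters.
        d = A.shape[0]
        X = x.reshape(d, d)
        return (A @ X @ B).reshape(-1)

    d = 8
    D = d * d                      # D = 64
    rng = np.random.default_rng(0)
    x = rng.standard_normal(D)
    W = rng.standard_normal((D, D))
    A = rng.standard_normal((d, d))
    B = rng.standard_normal((d, d))

    y_full = full_projection(x, W)        # shape (64,)
    y_bp = bilinear_projection(x, A, B)   # shape (64,), same output size
    print(W.size, A.size + B.size)        # 4096 vs. 128 parameters
    ```

    Both layers map R^D to R^D, but the bilinear factorization constrains the mapping, which is the reduced degree of freedom the abstract compensates for by widening the output channels.
    
    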
    Journal Title
    IEEE Transactions on Neural Networks and Learning Systems
    DOI
    https://doi.org/10.1109/tnnls.2020.3016688
    Subject
    Nanotechnology
    Publication URI
    http://hdl.handle.net/10072/397271
    Collection
    • Journal articles
