MLP with backpropagation

This is a discussion on MLP with backpropagation within the General AI Programming forums, part of the Cprogramming.com and AIHorizon.com's Artificial Intelligence Boards category; I am designing an MLP with the backpropagation algorithm, with "rhoe" as a learning factor. I need information regarding how ...

  1. #1
    Registered User
    Join Date
    Jul 2008
    Posts
    2

MLP with backpropagation

I am designing an MLP with the backpropagation algorithm, with "rhoe" as a learning factor, and I need information on how it helps learning. I'm using it to classify heart sounds. Should I set all the weights myself, or take them randomly?
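Assuming "rhoe" here means ρ (rho), a common symbol for the learning rate, its role is simply to scale each gradient-descent weight update. A minimal Python sketch (names are illustrative, not from any particular library):

```python
def update_weights(weights, gradients, rho=0.1):
    """One gradient-descent step: move each weight against its error gradient.

    rho (the learning rate) scales the step size: too large and training
    can diverge or oscillate, too small and convergence is very slow.
    """
    return [w - rho * g for w, g in zip(weights, gradients)]

# Example: one update step on two weights
w = [0.5, -0.3]
grad = [0.2, -0.1]
w = update_weights(w, grad, rho=0.1)  # each weight nudged by -rho * gradient
```

The same rho multiplies every delta computed by backpropagation, so tuning it (often by trial on a validation set) is usually the first knob to turn.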

  2. #2
    Registered User
    Join Date
    Jul 2008
    Posts
    2
Please reply urgently, as I have to apply this in my final-year project.

  3. #3
    and the hat of wrongness Salem's Avatar
    Join Date
    Aug 2001
    Location
    The edge of the known universe
    Posts
    32,755
    http://www.catb.org/~esr/faqs/smart-questions.html
    Bzzzt - urgency
    Bzzzt - not using google
    If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
    If at first you don't succeed, try writing your phone number on the exam paper.
    I support http://www.ukip.org/ as the first necessary step to a free Europe.

  4. #4
    Malum in se abachler's Avatar
    Join Date
    Apr 2007
    Posts
    3,189
I'm not familiar with "rhoe", and a Google search didn't turn up much on the first page. Sorry, it's not my project and not my responsibility to do the research, so perhaps you could elaborate and/or spell out what it stands for. Never assume that everyone, even experts, is familiar with every acronym used in your little niche of the field. I have worked for 10 different companies, and all 10 used different terms for the same things.

As for the initial weights, it is preferred to use random initial weights.
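A minimal sketch of that random initialization; the symmetric range ±0.5 is one common choice, not a rule, and the layer sizes are just examples:

```python
import random

def init_weights(n_inputs, n_hidden, scale=0.5):
    # Each hidden unit gets its own small random weight per input (plus one
    # bias weight), so units start out different and can learn different
    # features. Identical initial weights would make the units redundant.
    return [[random.uniform(-scale, scale) for _ in range(n_inputs + 1)]
            for _ in range(n_hidden)]

# Example: a hidden layer of 3 units over 4 inputs
hidden_layer = init_weights(n_inputs=4, n_hidden=3)
```

Starting from hand-picked identical weights breaks nothing syntactically, but symmetry means every unit computes the same thing and receives the same updates, which is why random initialization is preferred.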

As for training, that depends on your training methodology. Do you use localized training methods, or apply the global error? Using global error, there is no need to sequence the training examples. If you use localized training, i.e. test then apply the error, you need to randomly shuffle the examples after every epoch.
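The shuffling advice for per-pattern ("localized") training can be sketched like this; the function names are placeholders for whatever your own forward-pass/update routine is called:

```python
import random

def train_online(examples, apply_example, epochs=10):
    # Per-pattern training: run the network and apply the error update
    # after every single example. Reshuffling the presentation order each
    # epoch stops the network from fitting the ordering instead of the data.
    for _ in range(epochs):
        random.shuffle(examples)
        for x, target in examples:
            apply_example(x, target)  # forward pass + backprop update

# Usage: examples is a list of (input_vector, target) pairs
# train_online(dataset, my_backprop_step, epochs=50)
```

With batch (global-error) training, by contrast, the gradients from all examples are summed before one update, so presentation order cancels out and no shuffling is needed.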
    Last edited by abachler; 07-22-2008 at 03:45 PM.
    Until you can build a working general purpose reprogrammable computer out of basic components from radio shack, you are not fit to call yourself a programmer in my presence. This is cwhizard, signing off.

