Dense Distributed Representations • A form of distributed representation • Properties: • Each neuron or unit participates in many concepts • Only a little information can be obtained from any single neuron or unit
Dense Distributed Representations • Similar words such as blur and blue produce heavily overlapping activation patterns • The intended word is read off from whichever pattern has the most active neurons
Dense Distributed Representations • Seidenberg and McClelland (1989) • 200 hidden units • After training: 97.3% correct on words such as pint, mint, and said
Dense Distributed Representations • Seidenberg and McClelland (1989) • 22 hidden units active for pint • Identical words → identical patterns, high interference • Different words → different patterns, low interference • Similar words: 11 shared neurons, 50% overlap • Dissimilar words: 4 shared neurons, 18% overlap
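These overlap figures follow directly from the shared-unit counts. A minimal sketch (hypothetical binary patterns, not the trained model's actual activations) that reproduces the slide's numbers:

```python
import numpy as np

n_hidden = 200   # hidden units in the Seidenberg & McClelland model
n_active = 22    # units active for a word such as "pint"

def patterns_sharing(k):
    """Two hypothetical binary patterns, n_active units on, exactly k shared."""
    a = np.zeros(n_hidden, dtype=bool)
    b = np.zeros(n_hidden, dtype=bool)
    a[:n_active] = True
    b[n_active - k:2 * n_active - k] = True  # shifted so exactly k units coincide
    return a, b

for label, k in [("similar words (e.g. pint/mint)", 11), ("dissimilar words", 4)]:
    a, b = patterns_sharing(k)
    shared = (a & b).sum()
    print(f"{label}: {shared} shared units, {shared / n_active:.0%} overlap")
# similar words:    11 shared units -> 50% overlap
# dissimilar words:  4 shared units -> 18% overlap
```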
Dense Distributed Representations • Little can be concluded from the activity of any single neuron • 2,897 input words, 2,000 of them non-identical • 18% overlap → 360 words share a given neuron (2000 × 0.18)
Dense Distributed Representations • If the overlap between non-identical words were very small • 2,897 input words, 2,000 of them non-identical • 1% overlap → only 20 words share a given neuron (2000 × 0.01)
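A worked version of the arithmetic on these two slides: the expected number of words sharing any given unit is just the overlap rate times the number of non-identical words.

```python
n_words = 2000                # non-identical words in the training set
print(int(n_words * 0.18))    # observed 18% overlap -> 360 words per unit
print(int(n_words * 0.01))    # hypothetical 1% overlap -> 20 words per unit
```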
Berkeley, Dawson, Medler, Schopflocher, and Hornsby (1995) • Used jittered density plots (scatter plots of unit activations) • Confirmed the use of dense distributed representations
Bowers, Damian, and Davis (2008) recently carried out a set of these analyses on a PDP model of short-term memory (STM) and word naming. The simulations were based on Botvinick and Plaut's (2006) model of STM, which includes a localist input and output unit for each letter and 200 hidden units. After training, the model could reproduce a series of 6 random letters at ~50% accuracy, which roughly matches human performance in an immediate serial recall task. In one analysis, we successfully trained their model to this criterion and then computed jittered density plots for all the hidden units in response to all 26 letters (the scatter plots were constructed in response to single letters rather than lists of letters).

The plots of the first 24 (out of 200) hidden units are presented in Figure 5A. As is clear from these plots (and equally true of the remaining plots), it is not possible to interpret the output of any given hidden unit, as each unit responds to many different letters. For the model to correctly retrieve a single letter (let alone a list of 6 letters in STM), it must rely on a pattern of activation across a set of units. That is, the model has learned to support STM on the basis of dense distributed representations. In another analysis, I trained the same network to name a set of 275 monosyllabic words presented one at a time. That is, rather than supporting STM, the model learned to name single words. Each input and output was a pattern of activation over three letter units, and the model was trained to reproduce the input pattern at the output layer. After training, it succeeded (~100%) on both trained words and 275 unfamiliar words. Figure 5B presents the jittered density plots of the first 24 hidden units in response to the 275 familiar words. Once again, the model succeeded on the basis of dense distributed representations. These analyses support Seidenberg and McClelland's (1989) earlier conclusion.
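For illustration, a minimal matplotlib sketch of jittered density plots of this kind, using made-up activation values in place of the real hidden-unit data: each unit's activation to every input gives the x-coordinate, and a small random vertical jitter keeps coincident points visible.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_inputs, n_units = 275, 24                # e.g. 275 words, first 24 hidden units
activations = rng.beta(0.5, 0.5, size=(n_inputs, n_units))  # fake sigmoid-like data

fig, axes = plt.subplots(4, 6, figsize=(12, 8), sharex=True)
for unit, ax in enumerate(axes.flat):
    jitter = rng.uniform(0, 1, size=n_inputs)   # random y only spreads the points
    ax.scatter(activations[:, unit], jitter, s=4, alpha=0.5)
    ax.set_title(f"unit {unit}", fontsize=8)
    ax.set_yticks([])                           # the y-axis carries no information
fig.suptitle("Jittered density plots of hidden-unit activations")
plt.tight_layout()
plt.show()
```

A unit with an interpretable, localist response would show a tight band of points; in the dense distributed case, every unit's activations spread across the whole range.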
Sparse Distributed Coding • A complex concept is coded by a small number of neurons • Each neuron participates in only a limited number of concepts
Properties • Fast learning • Poor generalization
Differences between sparse and dense coding • Number of neurons active per concept • Generalization ability • Learning speed
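A rough simulation of the first two differences (illustrative parameters, not taken from the slides): random dense codes overlap heavily, while random sparse codes almost never do, which is why sparse codes learn with little interference but have a weaker basis for generalization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_pairs = 200, 1000

def random_code(n_active):
    """A random binary code with n_active of n_units switched on."""
    c = np.zeros(n_units, dtype=bool)
    c[rng.choice(n_units, size=n_active, replace=False)] = True
    return c

for label, n_active in [("dense (100/200 active)", 100), ("sparse (5/200 active)", 5)]:
    shared = [(random_code(n_active) & random_code(n_active)).sum()
              for _ in range(n_pairs)]
    print(f"{label}: {np.mean(shared):.1f} units shared per pair on average")
# dense codes share ~50 units per pair: related concepts reuse units, which
# supports generalization but causes interference during learning;
# sparse codes share ~0.1 units: little interference, but also little
# shared structure to generalize from
```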
Differences between sparse and coarse coding • Number of neurons active per concept • Generalization ability • Hierarchical structure
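For contrast with sparse coding, a minimal sketch of coarse coding (my own illustration, not from the slides): many broadly tuned, overlapping units respond to each stimulus, and the stimulus is recovered from the joint pattern rather than from any single unit.

```python
import numpy as np

centers = np.linspace(0, 10, 11)   # preferred stimuli of 11 coarsely tuned units
width = 1.5                        # broad tuning -> neighbouring units overlap

def encode(x):
    """Population response: each unit fires by proximity to its preferred stimulus."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

def decode(r):
    """Read the stimulus back as the response-weighted mean of the centers."""
    return (r * centers).sum() / r.sum()

r = encode(3.7)
print(np.round(r, 2))   # many units partially active at once
print(decode(r))        # ~3.7 recovered from the joint pattern (edge effects aside)
```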