forecasting:learn_forecasting [2016/02/02 09:17]
atasato created
forecasting:learn_forecasting [2021/09/19 21:59] (current)
  * When the gradient is 0, the function given has the least amount of error.
    * Catch: just because gradient = 0 doesn't mean that it's a //global// minimum!
  * **We can solve for the zero-point of the gradient using calculus and the partial derivative of ''J'' with respect to ''x'', or each of ''x1, x2, x3, etc.'' if there is more than one feature you use to find ''y''.**
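A minimal sketch of the zero-gradient idea, assuming the simplest possible case (a one-parameter model ''y = m*x'' with squared-error cost; the data here is made up for illustration):

```python
# For J(m) = sum_i (m*x_i - y_i)^2, calculus gives
#   dJ/dm = 2 * sum_i x_i * (m*x_i - y_i)
# and setting that gradient to 0 yields the closed form
#   m = sum_i (x_i * y_i) / sum_i (x_i ** 2)
def best_slope(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]    # data generated by y = 2x
m = best_slope(xs, ys)  # the zero-gradient point is m = 2.0
```

With more than one feature the same move is done per parameter (one partial derivative each), giving a system of equations instead of a single one.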
  
== Review ==
Coming soon...
  
===== Hierarchical Clustering / Hierarchical Clustering Analysis =====
  
We should cover SLINK and CLINK, algorithms that reduce single- and complete-linkage clustering from the naive O(n^3) (or even O(2^n) in some formulations) down to O(n^2) time complexity.
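A sketch of SLINK (Sibson's O(n^2) single-linkage algorithm) in plain Python, using its standard "pointer representation" output; the variable names and the toy 1-D data are my own illustration, not from any particular library. CLINK is the analogous O(n^2) scheme for complete linkage.

```python
import math

def slink(points, dist):
    """Sibson's SLINK: single-linkage clustering in O(n^2) time.

    Returns the pointer representation (pi, lam): object i merges into
    the cluster containing pi[i] at dendrogram height lam[i]."""
    n = len(points)
    pi = [0] * n      # pointer to a later-inserted representative
    lam = [0.0] * n   # merge heights
    M = [0.0] * n     # scratch distances to the newly inserted point
    for i in range(n):            # insert points one at a time
        pi[i] = i
        lam[i] = math.inf
        for j in range(i):
            M[j] = dist(points[j], points[i])
        for j in range(i):        # update pointers against the new point
            if lam[j] >= M[j]:
                M[pi[j]] = min(M[pi[j]], lam[j])
                lam[j] = M[j]
                pi[j] = i
            else:
                M[pi[j]] = min(M[pi[j]], M[j])
        for j in range(i):        # repair pointers that now skip a level
            if lam[j] >= lam[pi[j]]:
                pi[j] = i
    return pi, lam

# Toy 1-D data: single linkage merges at heights 1 (0 with 1) and 9.
pi, lam = slink([0.0, 1.0, 10.0], lambda a, b: abs(a - b))
```

The key trick is that each insertion only touches the points seen so far, so the whole run is O(n^2) time with O(n) extra memory, versus O(n^3) for the naive repeated-matrix-scan approach.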

===== Density-based spatial clustering of applications with noise (DBSCAN) =====

According to Wikipedia, this is an award-winning density-based clustering method.
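As a rough sketch of how it works: a direct transcription of the standard DBSCAN pseudocode (as given on Wikipedia) into plain Python, not an optimized implementation. The parameter names and the toy points are mine.

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id (0, 1, ...), or -1 for noise."""
    n = len(points)
    labels = [None] * n
    cluster = -1

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1           # noise (may become a border point later)
            continue
        cluster += 1                 # i is a core point: start a new cluster
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster  # noise reached from a core point: border
            if labels[q] is not None:
                continue
            labels[q] = cluster
            q_nbrs = neighbors(q)
            if len(q_nbrs) >= min_pts:
                seeds.extend(q_nbrs) # q is also a core point: keep expanding
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(pts, eps=2.0, min_pts=3)  # two clusters plus one noise point
```

The naive neighbor scan makes this O(n^2); real implementations use a spatial index for the range queries.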

===== Ordering points to identify the clustering structure (OPTICS) =====

Wikipedia describes it as an improved alternative to DBSCAN.
  
===== Non-negative Matrix Factorization (NMF) =====
  
Coming soon...

===== Distance Types =====

Euclidean, Manhattan, Mahalanobis distance.
Perhaps briefly mention string-distance algorithms (for text data).
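A quick reference sketch of all four, in plain Python (the Mahalanobis version takes the inverse covariance matrix as input, and Levenshtein stands in for the string-distance family):

```python
import math

def euclidean(a, b):
    # Straight-line distance: sqrt of summed squared differences.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Grid ("city block") distance: summed absolute differences.
    return sum(abs(x - y) for x, y in zip(a, b))

def mahalanobis(a, b, cov_inv):
    # sqrt((a-b)^T S^-1 (a-b)), where cov_inv is the inverse covariance
    # matrix S^-1; this rescales each direction by the data's spread.
    d = [x - y for x, y in zip(a, b)]
    t = [sum(cov_inv[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(t[i] * d[i] for i in range(len(d))))

def levenshtein(s, t):
    # Edit distance between strings: minimum insertions/deletions/substitutions.
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (cs != ct)))
        prev = cur
    return prev[-1]

d_euc = euclidean((0, 0), (3, 4))   # 5.0
d_man = manhattan((0, 0), (3, 4))   # 7
identity = [[1.0, 0.0], [0.0, 1.0]]
d_mah = mahalanobis((0, 0), (3, 4), identity)  # identity covariance: same as Euclidean
d_str = levenshtein("kitten", "sitting")       # 3
```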

===== Bayesian Statistics & Cause and Effect =====

IMPORTANT!!!
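A tiny worked example of Bayes' rule to anchor the section; the probabilities here are made-up illustrative numbers, not real weather statistics:

```python
# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B).
# Hypothetical priors: P(rain) = 0.2, P(clouds | rain) = 0.9,
# P(clouds | no rain) = 0.3.
p_rain = 0.2
p_clouds_given_rain = 0.9
p_clouds_given_dry = 0.3

# Law of total probability gives the overall chance of clouds.
p_clouds = p_clouds_given_rain * p_rain + p_clouds_given_dry * (1 - p_rain)

# Posterior: observing clouds raises the probability of rain
# from 0.2 to about 0.43.
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds
```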
  
===== Neural Networks =====
  
Coming soon...

===== Pseudotime (???) =====

e.g. DeLorean, Monocle as applied to these datasets: take the minimum spanning tree of a reduced-dimensionality graph and plot the longest path through it; this path represents a nice progression that can be thought of as varying along a "pseudotime" variable related to the change in expression of features along the way.
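A rough sketch of the graph part of that pipeline, assuming already-reduced 2-D points (the dimensionality reduction itself is omitted): build the MST with Prim's algorithm, then find the tree's longest weighted path with two farthest-node searches.

```python
import math

def mst_longest_path(points):
    """Prim's MST over the complete Euclidean graph, then the tree's
    longest weighted path via two farthest-node searches."""
    n = len(points)

    def dist(a, b):
        return math.dist(points[a], points[b])

    # Prim's algorithm, growing the tree from node 0.
    parent = {}
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while best:
        i = min(best, key=lambda k: best[k][0])
        _, parent[i] = best.pop(i)
        for j in best:
            d = dist(i, j)
            if d < best[j][0]:
                best[j] = (d, i)

    # Adjacency list of the tree.
    adj = {i: [] for i in range(n)}
    for child, par in parent.items():
        w = dist(child, par)
        adj[child].append((par, w))
        adj[par].append((child, w))

    def farthest(src):
        # Depth-first search for the heaviest-total-weight path from src.
        seen, stack, far = {src}, [(src, 0.0, [src])], (0.0, [src])
        while stack:
            node, d, path = stack.pop()
            if d > far[0]:
                far = (d, path)
            for nxt, w in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, d + w, path + [nxt]))
        return far[1]

    end = farthest(0)[-1]  # one endpoint of the longest path in the tree
    return farthest(end)   # node ordering along the "pseudotime" axis

order = mst_longest_path([(0, 0), (1, 0), (2, 0), (3, 0)])
```

For collinear points like these, the MST is a chain and the longest path simply walks it end to end; that ordering is the pseudotime.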

Probably not worth looking at.

===== Final Remarks =====

Weather prediction seems to require some application of Bayesian statistics. It's a bit shallow to assume that the features we possess are all that affects the weather; however, it's also risky to challenge Occam's Razor, the principle that simpler models are better. Other scientists have used this heuristic to produce good theories (quantum mechanics, relativity, etc.). However, the weather is clearly not so easy to solve (weather forecasts can still be off sometimes, right?), and from some preliminary research, that may have to do with chaos theory.

Chaos theory deals with systems that are not linear and behave more like cryptographic hashes: small changes in the initial/input state result in greatly different behaviors. Apparently the weather works like this too. But what if the problem is that there are, in fact, many different factors that influence the final, observed features in a non-linear fashion? Perhaps the final solution isn't going to be linear, but it might still be predictable with the correct model.
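A quick demonstration of that sensitivity using the logistic map x → 4x(1 − x), a standard toy chaotic system (the choice of map and the starting values are mine, just for illustration):

```python
def logistic_orbit(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 100)
b = logistic_orbit(0.2 + 1e-9, 100)  # nudge the 9th decimal of the start
max_gap = max(abs(x - y) for x, y in zip(a, b))
# The 1e-9 initial difference gets amplified by many orders of magnitude,
# so the two "forecasts" soon have nothing to do with each other.
```

This is the practical obstacle for weather models: measurement error in the initial state, however small, eventually dominates the prediction.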

It's clear that linear models will not work for weather prediction, especially for unusual events such as a storm, hurricane, or even a tsunami. So our endgame is going to end up here, I'm guessing.
  
  
  