Author: statcompute

Transpose in Clojure

(require '[huri.core :as h] '[clojure.core.matrix.dataset :as d] '[incanter.core :as i]) ;; FROM MAP OF ROWS TO MAP OF COLUMNS (def byRow [{:x 1 :y "a"} {:x 2 :y "b"} {:x...continue reading.
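
Stripped of the libraries, the transpose itself needs only core Clojure. Below is a minimal sketch, assuming toy data like the post's (the by-row sample and the rows->cols helper are illustrative, not taken from the post):

(def by-row [{:x 1 :y "a"} {:x 2 :y "b"} {:x 3 :y "c"}])

(defn rows->cols
  "Turn a seq of row maps into one map of column vectors."
  [rows]
  (reduce (fn [acc row]
            ;; append each cell to the vector kept under its column key
            (reduce-kv (fn [m k v] (update m k (fnil conj []) v)) acc row))
          {}
          rows))

(rows->cols by-row)
;; => {:x [1 2 3], :y ["a" "b" "c"]}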

Clojure Integration with R

(require '[tnoda.rashinban :as rr] '[tnoda.rashinban.core :as rc] '[clojure.core.matrix.dataset :as dt] '[clojure.core.matrix.impl.dataset :as id]) ;; CREATE A TOY DATASET (def ds [{:id 1.0 :name "name1"} {:id 2.0 :n...continue reading.
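
The rashinban calls are cut off above, so here is only the plain-Clojure prep step as a sketch: R works column-wise, so the row maps get reshaped into parallel vectors first (the ds sample below is illustrative; the post's second record is truncated):

(def ds [{:id 1.0 :name "name1"} {:id 2.0 :name "name2"}])

;; one vector per column, keyed by the column name
(def cols (into {} (for [k (keys (first ds))] [k (mapv k ds)])))
;; => {:id [1.0 2.0], :name ["name1" "name2"]}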

Parse CSV File with Headers in Clojure

; (defproject prj "0.1.0-SNAPSHOT" ; :dependencies [[org.clojure/clojure "1.8.0"] ; [org.clojars.bmabey/csvlib "0.3.6"] ; [ultra-csv "0.2.1"] ; [incanter "1.5.7"] ; [semantic-csv "0.2.1-alpha1"] ; [org.clojure/data.csv "0.1.4"]]) (require '[csvlib :...continue reading.
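
Of the libraries listed, org.clojure/data.csv alone is enough for a header-keyed parse. A minimal sketch (the csv->maps helper and the file path are placeholders, not the post's code):

(require '[clojure.data.csv :as csv]
         '[clojure.java.io :as io])

(defn csv->maps
  "Read a CSV file and return a vector of maps keyed by the header row."
  [path]
  (with-open [rdr (io/reader path)]
    (let [[header & rows] (csv/read-csv rdr)
          ks (map keyword header)]
      ;; mapv is eager, so all rows are realized before the reader closes
      (mapv #(zipmap ks %) rows))))

;; (csv->maps "data.csv")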

For Loop and Map in Clojure

;; DEFINE THE DATABASE (def db {:classname "org.sqlite.JDBC" :subprotocol "sqlite" :subname "/home/liuwensui/Downloads/chinook.db"}) ;; CALL PACKAGES (require '[clojure.java.jdbc :as j] '[criterium.core :as c] '[cloju...continue reading.
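
The query itself is truncated, so here is the for-versus-map comparison on in-memory rows instead, with criterium (already in the post's requires) timing either style; the toy rows stand in for the SQLite result set:

(require '[criterium.core :as c])

(def rows [{:id 1 :total 10.0} {:id 2 :total 20.0}])

;; list-comprehension style
(for [r rows] (* 2 (:total r)))

;; higher-order style
(map #(* 2 (:total %)) rows)
;; both => (20.0 40.0)

;; doall forces the lazy seq so the benchmark measures the real work
(c/quick-bench (doall (map #(* 2 (:total %)) rows)))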

MLE in R

When I learn and experiment with a new model, I always like to start with its likelihood function in order to gain a better understanding of its statistical nature. That’s why...continue reading.
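
As a one-line reminder of the idea (standard notation, not quoted from the post), MLE picks the parameters that maximize the log-likelihood of the observed data:

\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} \; \ell(\theta), \qquad \ell(\theta) = \sum_{i=1}^{n} \log f(y_i \mid \theta)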

Granular Monotonic Binning in SAS

The post (https://statcompute.wordpress.com/2017/06/15/finer-monotonic-binning-based-on-isotonic-regression) shows how to do finer monotonic binning with isotonic regression in R. Below is a SAS macro implementing the monotonic binning with the...continue reading.
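
For reference, the isotonic regression underneath solves a weighted least-squares problem under a monotonicity constraint (standard formulation, not taken from the macro):

\min_{\hat{y}_1,\dots,\hat{y}_n} \; \sum_{i=1}^{n} w_i \,(y_i - \hat{y}_i)^2 \quad \text{subject to} \quad \hat{y}_1 \le \hat{y}_2 \le \cdots \le \hat{y}_n

For a decreasing trend, the inequalities are simply reversed.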

Model Non-Negative Numeric Outcomes with Zeros

As mentioned in the previous post (https://statcompute.wordpress.com/2017/06/29/model-operational-loss-directly-with-tweedie-glm/), we often need to model non-negative numeric outcomes with zeros in operational loss model development. The Tweedie GLM provides a convenient interface to...continue reading.
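
A quick refresher on why Tweedie fits here (standard facts about the family, not quoted from the post): its variance function is

\mathrm{Var}(Y) = \phi\, \mu^{p}, \qquad 1 < p < 2,

and in that power range the distribution is a compound Poisson-gamma, which places positive probability mass exactly at zero while remaining continuous on the positive reals.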

DART: Dropout Regularization in Boosting Ensembles

The dropout approach developed by Hinton has been widely employed in deep learning to prevent deep neural networks from overfitting, as shown in https://statcompute.wordpress.com/2017/01/02/dropout-regularization-in-deep-neural-networks. In the paper http://proceedings.mlr.press/v38/korlakaivinayak15.pdf, the...continue reading.