Lec-02 Simple Linear Regression LAB


>> Hypothesis and Cost


Hypothesis
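  • For reference, the standard hypothesis of simple linear regression is
    H(x) = Wx + b
    where W is the weight and b is the bias.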

Cost
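  • For reference, the standard cost for this hypothesis is the mean squared error over the m training examples:
    cost(W, b) = \frac{1}{m} \sum_{i=1}^{m} \left( H(x_i) - y_i \right)^2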
  • Finding the values of W and b that give the lowest cost is what we call learning.
  • How do we write the cost function in TensorFlow?
    x_data = [1,2,3,4,5]
    y_data = [1,2,3,4,5]
    W = tf.Variable(2.9)
    b = tf.Variable(0.5)
    hypothesis = W * x_data + b
    # cost(W, b): mean squared error between the hypothesis and the labels
    cost = tf.reduce_mean(tf.square(hypothesis - y_data))
    : Since the input x and the output y hold identical values, training should drive W toward 1 and b toward 0.
    # tf.reduce_mean(): mean of all elements
    v = [1., 2., 3., 4.]
    tf.reduce_mean(v)  # 2.5
    # tf.square(): element-wise square
    tf.square(3)  # 9
    Gradient descent (the gradient descent algorithm)
    : minimize cost(W,b)
    : learning_rate = 0.01
    # Record operations on the variables (W, b) on the tape
    with tf.GradientTape() as tape: 	 
        hypothesis = W * x_data + b
        cost = tf.reduce_mean(tf.square(hypothesis - y_data))
    # Compute the gradients, i.e., the derivatives of cost with respect to W and b
    W_grad, b_grad = tape.gradient(cost, [W,b]) 
    W.assign_sub(learning_rate * W_grad)
    b.assign_sub(learning_rate * b_grad)
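    The two assign_sub calls above carry out the usual gradient descent update with learning rate \alpha (= learning_rate):
    W \leftarrow W - \alpha \cdot \frac{\partial}{\partial W} cost(W, b)
    b \leftarrow b - \alpha \cdot \frac{\partial}{\partial b} cost(W, b)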
  • A.assign_sub(B)
    : A = A - B
    : A -= B
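  • A quick check of assign_sub on a tf.Variable (the numbers below are only illustrative, not from the lecture):
    import tensorflow as tf

    A = tf.Variable(10.0)
    A.assign_sub(3.0)  # in-place update: A = A - 3.0
    print(A.numpy())   # 7.0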
  • Full Code
    import tensorflow as tf
    #Data
    x_data = [1,2,3,4,5]
    y_data = [1,2,3,4,5]
      
    # W, b initialize
    W = tf.Variable(2.9)
    b = tf.Variable(0.5)
      
    learning_rate = 0.01
      
    for i in range(100+1): # W, b update
        # Gradient descent
        with tf.GradientTape() as tape:
            hypothesis = W * x_data + b
            cost = tf.reduce_mean(tf.square(hypothesis - y_data))
        W_grad, b_grad = tape.gradient(cost, [W,b])
        W.assign_sub(learning_rate * W_grad)
        b.assign_sub(learning_rate * b_grad)
        if i % 10 == 0:
            print("{:5}|{:10.4f}|{:10.4}|{:10.6f}".format(i, W.numpy(), b.numpy(), cost))
    Output from running this code in Colab
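    As a small follow-up sketch (assuming the full code above has just been run, so W and b hold the trained values), the fit can be checked by predicting on a couple of illustrative inputs; since x and y are identical here, the outputs should be close to the inputs:
    # Predict with the trained parameters
    print(W * 5 + b)    # expected to be close to 5
    print(W * 2.5 + b)  # expected to be close to 2.5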

    Prepared for everyone: Season 2 - TensorFlow