Paper Reviews (16)

[Paper Review] Meta's NeuralProphet — NeuralProphet: Explainable Forecasting at Scale

[2111.15397] NeuralProphet: Explainable Forecasting at Scale — "We introduce NeuralProphet, a successor to Facebook Prophet, which set an industry standard for explainable, scalable, and user-friendly forecasting frameworks. With the proliferation of time series data, explainable forecasting remains a challenging task…" (arxiv.org) Today's paper is Meta's Ner..

Paper Review · 2025.03.12

[Paper Review] AR-Net: A simple Auto-Regressive Neural Network for time-series (2019)

[1911.12436] AR-Net: A simple Auto-Regressive Neural Network for time-series — "In this paper we present a new framework for time-series modeling that combines the best of traditional statistical models and neural networks. We focus on time-series with long-range dependencies, needed for monitoring fine granularity data (e.g. minutes, …" (arxiv.org)

Paper Review · 2025.03.03

[Paper Review] LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation

Paper title: LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation — "Graph Convolution Network (GCN) has become new state-of-the-art for collaborative filtering. Nevertheless, the reasons of its effectiveness for recommendation are no.."

Paper Review · 2024.08.30

[Paper Review] Finetuned Language Models Are Zero-Shot Learners

Finetuned Language Models Are Zero-Shot Learners — "This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot per…" (arxiv.org) Proposed model: FLAN · Background · Limitations of prior models · Proposed model: prior problem..

Paper Review · 2024.08.30

[Paper Review] BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer

Paper title: BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer — "Modeling users' dynamic and evolving preferences from their historical behaviors is challenging and crucial for recommendation systems..."

Paper Review · 2024.08.05

[Paper Review] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Paper title: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale — "While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision.."

Paper Review · 2024.07.19

[Paper Review] Attention Is All You Need

Attention Is All You Need — "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new…" (arxiv.org) Transformer · Background · Limitations of existing models · Proposed model overcoming those limitations · Proposed model: Transformer architecture · Multi-Head s..

Paper Review · 2024.07.17

[Paper Review] Segment Any Anomaly without Training via Hybrid Prompt Regularization

Cao, Yunkang, et al. "Segment Any Anomaly without Training via Hybrid Prompt Regularization." arXiv preprint arXiv:2305.10724 (2023). [2305.10724] (arxiv.org) — "We present a novel framework, i.e., Segment Any Anomaly + (SAA+), for zero-shot anomaly segmentatio.."

Paper Review · 2024.03.19